
In scientific and engineering software, most frustration doesn’t come from hard problems — it comes from unclear problem statements. A ticket that “sounds wrong” might actually describe a missing capability. A request for a “small improvement” might be masking a defect that corrupts results. When teams misclassify tickets, they waste time: developers investigate phantom bugs, researchers wait on features that were never scoped, and priorities drift.

This guide helps you quickly decide whether something is a bug report or a feature request, and shows how to write each one so the team can act on it. The goal is simple: fewer back-and-forth comments, faster triage, and fewer surprises in releases.

Why the distinction matters

Bug reports and feature requests are handled differently in most issue trackers and workflows (Trac, Jira, GitHub Issues, Redmine, etc.). They have different urgency, different acceptance criteria, and different ways to test and close the loop.

  • A bug report typically has an expectation of correctness: the software violates its own contract, documentation, or established behavior.
  • A feature request asks for new behavior: something the software does not currently promise to do, even if it would be useful.

When you label the ticket correctly, triage becomes easier: maintainers can reproduce, prioritize, and assign work without guessing what “should” happen.

The core definitions

What is a bug report?

A bug occurs when the system behaves incorrectly relative to an agreed reference point. That reference point can be:

  • Documentation or a published interface (API contract)
  • Previous stable behavior (a regression)
  • Scientific correctness (e.g., conservation laws, expected invariants, unit consistency)
  • Clearly stated requirements (including tests or specs)

In short: a bug report describes an error that should be fixed to restore correctness.

What is a feature request?

A feature request proposes a capability or improvement that would make the system more useful, flexible, or efficient, but is not required for correctness of current behavior. Examples include:

  • Supporting a new boundary condition type
  • Adding an exporter for a new file format
  • Improving performance beyond current targets
  • Adding UI/CLI options that don’t exist yet

In short: a feature request describes something new the system should do.

Quick decision checklist

If you only remember one rule, use this:

  • If the software breaks a promise, it’s a bug.
  • If you want a new promise, it’s a feature.

Ask yourself these questions:

  1. Did it ever work before in the same scenario? If yes, likely a bug (possibly a regression).
  2. Is there documentation that states the behavior should work? If yes, bug.
  3. Is the behavior ambiguous and you’re proposing what it should be? Likely a feature (or a spec clarification first).
  4. Is the problem that you can’t do something at all, but nothing is “wrong” with existing outputs? Likely a feature.
  5. Does it produce incorrect results, crash, corrupt data, or violate physical constraints? Bug.

Side-by-side comparison

Aspect           | Bug Report                                          | Feature Request
-----------------|-----------------------------------------------------|-----------------------------------------------------------
Meaning          | Something is wrong compared to expected behavior    | Something new or improved is desired
Reference point  | Docs, tests, prior versions, correctness criteria   | User needs, research goals, usability improvements
Typical evidence | Reproduction steps, logs, wrong output, crash trace | Use case, benefit, proposed behavior, acceptance criteria
Priority drivers | Severity, scope, frequency, risk to results         | Impact, demand, strategic roadmap, effort
How it is “done” | Verified fix; test passes; regression prevented     | Implemented spec; documented; usable end-to-end

Examples in scientific computing

Example 1: Wrong results due to units mismatch

You run a simulation and notice that a parameter documented as “meters” is treated as “millimeters,” shifting results by 1000×. If documentation and code disagree, and outputs are incorrect for documented usage, that’s a bug report. Your ticket should include the parameter, where it’s documented, a minimal case, and evidence of mismatch.
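To make the evidence concrete, the minimal case can be a few lines that contrast the documented-unit calculation with what the code actually returns. The sketch below is self-contained Python with made-up numbers and a stand-in function; in a real ticket you would call your package’s actual entry point instead:

    def reference_value(length_m):
        # Independent back-of-the-envelope result using the documented unit (meters).
        return 1.0 / length_m

    def run_simulation(length):
        # Stand-in for the buggy code path: it silently interprets the input
        # as millimeters, so the result is off by a factor of 1000.
        length_mm = length * 1000.0
        return 1.0 / length_mm

    length = 2.0e-6  # documented as meters
    mismatch = reference_value(length) / run_simulation(length)
    print(f"expected/observed mismatch factor: {mismatch:g}")  # prints 1000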

Example 2: Need a new PDE term or coupling

You want to add an electromigration term to an existing transport model. The current code works as designed; it just doesn’t include that physics. That’s a feature request. Your ticket should explain the governing equation, expected inputs, and how you would validate it (benchmarks or analytical limits).
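As a sketch of what “explain the governing equation” might look like in such a ticket, one common way to write a transport equation with an electromigration drift term is shown below; the exact form, sign conventions, and symbols (effective charge Z*, field E) are whatever your model defines, so treat this only as a placeholder:

    \frac{\partial c}{\partial t} = \nabla \cdot (D \nabla c) - \nabla \cdot (v\, c),
    \qquad v = \frac{D Z^{*} e}{k_B T}\, E

Stating the expected analytical limit (for example, zero drift when E = 0) doubles as the validation criterion the ticket asks for.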

Example 3: “It’s too slow”

Performance complaints can go either way. If the code used to run in 5 minutes and now takes 50 minutes on the same setup, that’s a bug (a performance regression). If the code has always taken 50 minutes and you want it to take 5, that’s a feature request (optimization work), unless the slowness stems from unintended behavior, such as a solver stalling on a convergence issue.
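For the regression case, the most useful evidence is a timing of the same workload, pinned to the same input and hardware, on both versions. The sketch below uses a placeholder workload; swap in your real run and record the number once per version:

    import time

    def run_case():
        # Placeholder workload; in a real ticket this is the exact same input deck,
        # mesh, and solver options on both the old and the new version.
        return sum(i * 1e-9 for i in range(1_000_000))

    start = time.perf_counter()
    run_case()
    elapsed = time.perf_counter() - start
    print(f"wall time: {elapsed:.2f} s")  # report this number for each version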

Example 4: Confusing error message

A confusing error message is typically a feature request (improve UX), unless the message is factually incorrect or masks the real error in a way that prevents diagnosis. In many teams, “Improve error message” is tracked as an enhancement, not a bug.
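The line between the two cases is easier to see in code. The sketch below uses made-up function names and messages: the first message is merely unhelpful, which most teams file as an enhancement, while the second misreports the cause and therefore blocks diagnosis, which many teams treat as a bug:

    def load_mesh_unhelpful(path):
        # Confusing but not wrong: users cannot tell what to fix. Enhancement.
        raise RuntimeError("error 12")

    def load_mesh_misleading(path):
        # Claims the file is missing even when it exists but is malformed.
        # This actively prevents diagnosis, so it is closer to a bug.
        raise FileNotFoundError(f"{path} not found")

    def load_mesh_improved(path):
        # What the enhancement would deliver: state the cause and point to the fix.
        raise ValueError(
            f"could not parse mesh file '{path}': expected a 3-column node table; "
            "see the mesh format documentation for the required header"
        )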

Common misclassifications and how to fix them

Misclassification: “It crashes” (but only with invalid input)

If the software crashes when given invalid input, it might be a bug if the program should fail gracefully (clear error, no corrupted files). If crashing is expected because the input is outside supported constraints and the tool already documents this, it could be an enhancement request to improve validation and messaging.
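In practice, “fail gracefully” usually means a validation step that rejects unsupported input with a clear message before any files are written. A minimal sketch, with an illustrative parameter name:

    def set_time_step(dt):
        # Validate before running: reject unsupported input with an actionable message.
        if dt <= 0:
            raise ValueError(
                f"time step must be positive, got {dt}; "
                "see the stability section of the documentation for guidance"
            )
        return dt

    set_time_step(0.01)        # accepted
    try:
        set_time_step(-1.0)    # rejected cleanly instead of crashing mid-run
    except ValueError as err:
        print(err)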

Misclassification: “The output looks wrong” (but expectations are unclear)

If the ticket doesn’t define what “right” means, maintainers cannot decide bug vs. feature. In that case, write the ticket as a question plus a proposed expectation, and attach evidence. Often the best first step is: “Clarify expected behavior” — then refile as bug or feature once confirmed.

Misclassification: “Please add option X” (but code already supports it)

This is neither a feature nor a bug in core behavior — it may be a documentation gap. If the capability exists but is hard to discover, create a doc ticket (or an enhancement focused on usability).

How to write a high-quality bug report

A good bug report makes it easy to reproduce, diagnose, and verify the fix.

Bug report template

  • Summary: one sentence describing the incorrect behavior
  • Environment: OS, Python/C++ version, package version/commit, MPI/solver versions if relevant
  • Steps to reproduce: minimal steps, minimal input files, minimal code snippet
  • Expected result: what should happen, and why (docs/tests/previous behavior)
  • Actual result: what happens instead (logs, stack trace, screenshots, output diff)
  • Impact: severity (crash, wrong science, minor UI), frequency, workaround if any

Bug report example (short)

Summary: Diffusion example fails with Dirichlet BC on 3D grid.

  • Expected: Simulation runs and produces stable field values as in 2D.
  • Actual: Solver diverges after N steps; values become NaN.
  • Evidence: minimal script (see the sketch below), parameter values, and log output.
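A reproduction script for a ticket like this is most useful when anyone can run it without extra context. The sketch below is a self-contained numpy stand-in rather than the real solver call; the pattern worth copying is fixed parameters, a fixed seed, and an explicit divergence check:

    import numpy as np

    n, D, dx, dt = 16, 1.0, 1.0, 1.0      # dt deliberately above the explicit stability limit
    rng = np.random.default_rng(0)
    c = 1e-6 * rng.standard_normal((n, n, n))
    c[0, :, :] = 1.0                      # Dirichlet value on one face

    for step in range(200):
        lap = (-6.0 * c
               + np.roll(c, 1, 0) + np.roll(c, -1, 0)
               + np.roll(c, 1, 1) + np.roll(c, -1, 1)
               + np.roll(c, 1, 2) + np.roll(c, -1, 2)) / dx**2
        c = c + dt * D * lap
        c[0, :, :] = 1.0                  # re-impose the boundary each step
        if not np.isfinite(c).all() or np.abs(c).max() > 1e12:
            print(f"divergence detected at step {step}")
            break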

How to write a strong feature request

A strong feature request reads like a mini design note: it explains why the capability matters, what “done” looks like, and how you will verify it.

Feature request template

  • Problem statement: what you cannot do today
  • Use case: who needs it and why (research workflow, teaching module, production pipeline)
  • Proposed solution: high-level behavior, UI/API changes, defaults
  • Acceptance criteria: what must be true to consider it complete
  • Validation: how to test (benchmarks, unit tests, analytical solutions, regression suite)
  • Alternatives considered: workarounds or other approaches

Feature request example (short)

Request: Add an option to export simulation state to a standardized visualization format at configurable intervals.

  • Use case: Large runs on HPC where intermediate visualization is needed for monitoring.
  • Acceptance: Export works for 2D/3D grids; includes metadata (time step, units); no major slowdown.
  • Validation: Compare exported fields with in-memory arrays; ensure reload reproduces state within tolerance (see the sketch below).
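The validation bullet translates almost directly into a test: export, reload, and compare against the in-memory state within a tolerance. In the sketch below, np.save and np.load stand in for the exporter that does not exist yet; the round-trip pattern is the part to keep:

    import os
    import tempfile
    import numpy as np

    field = np.random.default_rng(42).random((64, 64, 3))   # in-memory state

    with tempfile.TemporaryDirectory() as tmp:
        path = os.path.join(tmp, "state_t0100.npy")
        np.save(path, field)                                 # exporter stand-in
        reloaded = np.load(path)                             # reload stand-in

    np.testing.assert_allclose(reloaded, field, rtol=0.0, atol=1e-12)
    print("round-trip reproduces state within tolerance")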

When one ticket should become two

Many real-world issues mix bugs and features. Splitting them often accelerates progress.

  • If there is a crash (bug) and also a desire for a nicer message (feature), file two tickets.
  • If there is a correctness issue (bug) and also a request to support a new regime, separate “fix current behavior” from “extend capability.”
  • If the request is “make it faster” but you suspect a regression, file a bug for the regression and a feature for further optimization.

Splitting tickets lets teams close the bug quickly and schedule the enhancement sensibly.

Triage tips for maintainers and leads

Use labels and a short decision comment

When an ambiguous ticket arrives, add a short comment that locks in the classification:

  • “Classifying as bug because behavior contradicts documentation section X.”
  • “Classifying as feature request because this adds new solver capability not currently supported.”

Require minimal acceptance criteria for features

Feature requests fail when “done” is unclear. Ask for acceptance criteria early: what output, what interface, what tests, what example. A feature without acceptance criteria is effectively a wish list item.

Turn recurring questions into documentation tasks

If multiple “feature requests” are actually requests for guidance (“How do I…?”), capture them as docs improvements. This is especially common in scientific libraries where capabilities exist but are not obvious.

Practical templates you can copy into tickets

One-liner classifier

Use this as the first line of your ticket description:

  • Bug: “Observed behavior contradicts expected behavior defined by [docs/test/previous version].”
  • Feature: “New capability needed to support [use case], not currently available.”

Acceptance criteria starter set for features

  • API: New option/parameter documented with defaults
  • Example: Minimal working example included in docs/examples
  • Tests: At least one automated test covers main behavior
  • Backward compatibility: Existing scripts continue to run unchanged, or migration steps are documented (see the sketch below)
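The “Tests” and “Backward compatibility” bullets can be drafted as a pytest-style sketch even before implementation; here run_model and the export_interval option are hypothetical stand-ins for whatever the feature introduces:

    def run_model(steps, export_interval=None):
        """Stand-in for the real entry point; returns how many exports were made."""
        if export_interval is None:
            return 0                                   # old behavior: no exports
        return len(range(0, steps, export_interval))   # new behavior

    def test_default_matches_old_behavior():
        # Existing scripts pass no new argument and must behave exactly as before.
        assert run_model(100) == 0

    def test_export_interval_controls_output_count():
        assert run_model(100, export_interval=10) == 10

    if __name__ == "__main__":
        test_default_matches_old_behavior()
        test_export_interval_controls_output_count()
        print("acceptance checks pass")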

Conclusion

Bug reports restore trust; feature requests expand capability. Both are essential in scientific software, but they succeed only when they are written with the right intent and evidence. If you tie bugs to a reference point and tie features to a clear use case with acceptance criteria, you’ll reduce triage friction, accelerate development, and make releases more predictable — which ultimately improves research velocity and result reliability.