Every feature in a product has a cost: the engineering time to build it, the design time to specify it, the QA time to test it, the ongoing maintenance to keep it working, and the cognitive overhead it adds to the product for every user who encounters it. Most teams calculate whether a feature is worth building by asking whether users want it. The more useful question is whether the value it creates justifies all of those costs — and that requires a different calculation.
Feature ROI
Feature ROI is the ratio of value created by a feature to the cost of building and maintaining it. Value can be measured in several ways depending on the feature type: revenue generated (conversion improvements, upsell triggers), revenue protected (retention improvements, competitive defence), cost reduced (support deflection, automation), or strategic value (market positioning, enterprise requirement). Cost includes development time, ongoing maintenance, and the opportunity cost of the engineering capacity not spent on something else.
A feature that takes three months of engineering time to build and improves trial conversion by 0.5 percentage points needs to be evaluated against what three months of engineering could have produced on alternative priorities. If the conversion improvement adds £8,000/month in net new revenue and the engineering cost was £45,000, the payback period is approximately 5.6 months — a strong return. If the improvement adds £800/month, payback is 56 months — a weak return that was probably not the best use of three months of engineering.
The Cost per Feature Calculator structures this calculation. Enter the development cost (engineering hours × blended hourly cost), estimated maintenance overhead, and the measurable value the feature is expected to generate, and it calculates payback period, annual ROI, and the break-even point. Running this calculation before committing to features — not after — changes the prioritisation decisions significantly.
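The calculation the calculator performs can be sketched in a few lines. This is a minimal illustration, not the calculator's actual implementation; the function name and structure are assumptions, and it uses the £45,000 / £8,000-per-month figures from the example above.

```python
def feature_roi(dev_cost, annual_maintenance, monthly_value):
    """Payback period (months) and annual ROI for a feature.

    dev_cost: one-off build cost (engineering + design + QA), in pounds.
    annual_maintenance: recurring yearly maintenance cost, in pounds.
    monthly_value: measurable value generated per month, in pounds.
    """
    # Net monthly value after spreading maintenance across the year.
    monthly_net = monthly_value - annual_maintenance / 12
    payback_months = dev_cost / monthly_net if monthly_net > 0 else float("inf")
    annual_roi = (monthly_value * 12 - annual_maintenance) / dev_cost
    return payback_months, annual_roi

# The worked example, ignoring maintenance for simplicity:
months, roi = feature_roi(dev_cost=45_000, annual_maintenance=0, monthly_value=8_000)
# months ≈ 5.6, matching the payback period quoted above
```

Dropping the monthly value to £800 in the same call reproduces the 56-month payback from the weak-return case.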
Cost Breakdown
Accurate feature cost calculation requires including all relevant cost components, not just the initial development time.
Development cost: Engineering hours × blended cost per hour (salary, benefits, overhead). For a product team where the fully loaded engineer cost is £85/hour, a feature requiring 200 hours costs £17,000 in development alone. This is the number most teams use as the feature cost. It is also incomplete.
Design cost: Product design and UX work precedes and accompanies development. A feature requiring two weeks of design time at a fully loaded designer cost of £60/hour adds approximately £4,800 to the total.
Testing and QA: QA is often underestimated in feature cost calculations. Complex features touching multiple system components require disproportionate testing time. A feature taking 200 hours to develop might require 60 to 80 hours of QA.
Ongoing maintenance: Every feature that ships adds to the maintenance burden of the codebase. A conservative estimate is 15 to 20% of initial development cost per year in ongoing maintenance time — bug fixes, compatibility updates, edge case handling, and refactoring as the surrounding code evolves. A feature costing £17,000 to build costs approximately £2,550 to £3,400 per year to maintain indefinitely.
Opportunity cost: The hardest cost to calculate but the most significant. The true cost of a feature is not just its direct cost; it is the alternative value that could have been created with the same team capacity. A three-month feature that produces modest value has also consumed three months that could have gone to a potentially high-value alternative.
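Pulling the directly measurable components above into a single build cost looks like this. The rates and hours are the ones stated in the text; the QA hours (midpoint of the 60 to 80 range) and charging QA at the engineering rate are assumptions made for illustration.

```python
ENG_RATE = 85     # £/hour, fully loaded engineer cost
DESIGN_RATE = 60  # £/hour, fully loaded designer cost

dev_cost = 200 * ENG_RATE      # £17,000 for 200 engineering hours
design_cost = 80 * DESIGN_RATE # £4,800 for two weeks (80 hours) of design
qa_cost = 70 * ENG_RATE        # 70 QA hours assumed, midpoint of 60-80

build_cost = dev_cost + design_cost + qa_cost
print(build_cost)  # 27750 — well above the £17,000 most teams would quote

# Annual maintenance at 15-20% of initial development cost:
maintenance_low, maintenance_high = 0.15 * dev_cost, 0.20 * dev_cost
print(maintenance_low, maintenance_high)  # 2550.0 3400.0
```

The point of running the sum is that the "feature cost" most teams carry in their heads is the £17,000 development figure, while the all-in first-year figure here is over £30,000 once maintenance starts accruing.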
Decision Making
Feature investment decisions are improved by three specific practices:
Score features against a consistent framework before committing: Impact (what measurable outcome does this feature improve?), effort (what does it cost to build and maintain?), and confidence (how certain are we of the impact estimate?). A feature with high estimated impact and low confidence is a hypothesis to test cheaply before committing full build resources. A feature with high impact and high confidence justifies full investment.
Validate impact estimates with data: Most impact estimates are guesses dressed as forecasts. Conversion improvement estimates are wrong more often than they are right. Where possible, validate with small experiments before committing full development capacity — A/B tests on feature concepts, landing page tests for proposed premium features, prototype testing with target users.
Track actual ROI after launch: Features that ship rarely get retrospective evaluation against their original business case. Building a post-launch review into the feature development process — three months after release, compare actual impact against the original estimate — produces calibration data that improves future feature ROI estimates. Teams that do this consistently make progressively better investment decisions.
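One way to reduce the impact, effort, and confidence scores to a single comparable number is to treat the score as expected value per unit of cost. This is a sketch of one possible weighting, not a standard formula; the inputs and example figures are assumptions.

```python
def score_feature(impact, effort, confidence):
    """Prioritisation score: confidence-weighted expected value per unit of cost.

    impact: estimated annual value in pounds.
    effort: cost to build and maintain for a year, in pounds.
    confidence: 0-1 probability the impact estimate is roughly right.
    """
    return (impact * confidence) / effort

# Same estimated impact and effort, different confidence:
hypothesis = score_feature(impact=100_000, effort=30_000, confidence=0.3)  # 1.0
safe_bet = score_feature(impact=100_000, effort=30_000, confidence=0.9)    # 3.0
```

The low-confidence feature is the one to validate cheaply first: a successful experiment moves its confidence, and therefore its score, before any full build commitment is made.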

