Questions to Ask Before Approving Any Digital Initiative

Most digital initiatives are approved based on build cost and feature lists. The questions that actually determine success are rarely asked. These oversights create systems that cost more to run than to build, deliver features nobody uses, and become liabilities rather than assets.

Here are the questions that should be mandatory before any digital initiative is approved.

Questions About Validation

"Which requirements are validated with user behavior, not just opinions?"

This is the most important question, and it's almost never asked.

Stakeholders have opinions. Users say yes in surveys. Focus groups express enthusiasm. None of this predicts actual adoption.

What to look for:

  • Prototype testing with measurable engagement
  • Pilot users who demonstrate actual usage patterns
  • Commitments (signups, deposits, pre-orders) not just interest
  • Data showing behavioral validation, not just stated preference

Red flag: "We conducted extensive stakeholder interviews" without any behavioral validation.

"What happens if adoption is 50% lower than projected?"

Optimism is built into every digital initiative. Projections assume success. Budgets assume adoption.

What to look for:

  • Realistic scenarios for underperformance
  • Cost implications of lower-than-expected usage
  • Decision points for course correction
  • Mechanisms to reduce scope if validation fails

Red flag: No contingency planning. Only optimistic scenarios discussed.
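To make the downside concrete, a quick sensitivity check helps. The figures below are entirely hypothetical, chosen only to illustrate the mechanism: because run costs are largely fixed, cost per user roughly doubles when adoption halves.

```python
# Cost-per-user sensitivity to adoption shortfall.
# All figures are hypothetical, for illustration only.

annual_run_cost = 300_000  # assumed fixed annual operating cost
projected_users = 5_000    # assumed adoption projection

for adoption in (1.0, 0.75, 0.5):
    users = int(projected_users * adoption)
    cost_per_user = annual_run_cost / users
    print(f"{adoption:>4.0%} adoption: ${cost_per_user:,.0f} per user per year")
```

If a proposal cannot survive this kind of check at 50% adoption, the contingency planning question has already been answered.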

"How will we measure success, and when?"

"Success" is often defined as launch completion, not business outcomes.

What to look for:

  • Specific metrics with targets
  • Timeline for measurement
  • Decision triggers based on metrics
  • Accountability for outcomes, not just delivery

Red flag: Success defined as "on time, on budget delivery" without adoption or value metrics.

Questions About Total Cost

"What is the 5-year total cost of ownership?"

Build cost is the visible number. It's often the smaller number.

What to look for:

  • Annual operational costs (infrastructure, support, maintenance)
  • Staffing requirements post-launch
  • Integration maintenance costs
  • Enhancement and evolution costs
  • Eventual replacement or modernization costs

Red flag: Only build cost discussed. No operational cost model.
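A rough model makes the point. Every number below is a hypothetical assumption, not a benchmark, but the structure is what matters: annual operating line items, summed over the ownership horizon, routinely dwarf the one-time build figure.

```python
# Illustrative 5-year total-cost-of-ownership model.
# Every figure is a hypothetical assumption for illustration.

build_cost = 500_000            # one-time development (the visible number)
annual_infrastructure = 60_000  # hosting, licenses
annual_support_staff = 120_000  # fractional ops/support headcount
annual_maintenance = 75_000     # patches, integration upkeep
annual_enhancements = 50_000    # evolution of the system
years = 5

annual_operational = (annual_infrastructure + annual_support_staff
                      + annual_maintenance + annual_enhancements)
total_cost = build_cost + years * annual_operational

print(f"Build cost:        ${build_cost:,}")
print(f"5-year operations: ${years * annual_operational:,}")
print(f"5-year TCO:        ${total_cost:,}")
# Under these assumptions, operations total $1,525,000 —
# roughly three times the build cost.
```

Any proposal should be able to fill in its own version of these line items. If it can't, the operational cost model doesn't exist.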

"What does the operational cost model assume about incident volume?"

Incident volume drives support costs, on-call burden, and team capacity.

What to look for:

  • Incident volume projections based on comparable systems
  • Resolution time assumptions
  • Staffing model for incident response
  • Escalation and on-call implications

Red flag: No incident volume projections. Assumption that launch equals stability.
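A back-of-envelope calculation, again with hypothetical inputs, shows how incident volume translates directly into headcount:

```python
# Back-of-envelope support staffing estimate.
# All inputs are hypothetical assumptions for illustration.

incidents_per_month = 40        # projected from comparable systems
avg_resolution_hours = 3.0      # assumed mean time to resolve one incident
productive_hours_per_fte = 120  # monthly hours one engineer can
                                # realistically spend on incident work

incident_hours = incidents_per_month * avg_resolution_hours
ftes_needed = incident_hours / productive_hours_per_fte

print(f"Monthly incident workload: {incident_hours:.0f} hours")
print(f"Support FTEs required:     {ftes_needed:.1f}")
# 40 incidents at 3 hours each is a full engineer's incident
# capacity under these assumptions, before on-call overhead.
```

The point is not the specific numbers but that a proposal should contain them: if nobody can state an incident projection, nobody has budgeted for the people who will handle it.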

"How does operational cost change if we build half the features?"

This question reveals whether features are truly essential or simply included because stakeholders requested them.

What to look for:

  • Feature-level cost attribution
  • Clear priority tiers with cost implications
  • Willingness to reduce scope for operational efficiency
  • Understanding of operational cost drivers

Red flag: All features treated as equally essential. No scope flexibility.

Questions About Architecture

"What is the architecture optimized for: flexibility or simplicity?"

These are different design choices with different cost implications.

Flexibility sounds good. But flexible architectures are complex. Complex architectures are expensive to operate.

What to look for:

  • Clear rationale for architectural choices
  • Understanding of operational implications
  • Trade-off analysis between flexibility and simplicity
  • Evidence that complexity is justified by validated requirements

Red flag: Architecture designed for hypothetical future needs rather than validated current needs.

"What happens when one component fails?"

System resilience is often assumed, not designed.

What to look for:

  • Failure mode analysis
  • Graceful degradation design
  • Recovery time objectives
  • Dependency mapping and isolation

Red flag: Assumption that components won't fail. No resilience discussion.

"How will we debug production issues?"

Debugging in production is where architectural complexity becomes operational cost.

What to look for:

  • Observability strategy
  • Tracing across components
  • Log aggregation and analysis
  • Incident diagnosis workflows

Red flag: "We'll add monitoring later." Observability as afterthought.

Questions About Vendor/Partner Selection

"How does this vendor validate requirements before development?"

Vendors are incentivized to build what clients ask for. The best vendors push back.

What to look for:

  • Validation methodology
  • Examples of requirements they pushed back on
  • Balance between client requests and recommendations
  • Operational outcomes of previous projects

Red flag: Vendor agrees to everything without pushback. Pure order-taking.

"What does the vendor's support model look like post-launch?"

The relationship doesn't end at launch. It often intensifies.

What to look for:

  • Knowledge transfer plan
  • Documentation standards
  • Ongoing support terms
  • Transition to internal teams

Red flag: No post-launch plan. Assumption that the vendor walks away.

"Can we talk to IT operations at the vendor's previous clients?"

References often come from project sponsors. IT operations has a different perspective.

What to look for:

  • Operational stability of delivered systems
  • Incident volume and resolution experience
  • Documentation and knowledge transfer quality
  • Long-term maintainability

Red flag: Only project manager references. No operations perspective.

Questions About Scope Management

"What features would we cut if budget were reduced by 30%?"

This question reveals priority thinking and scope discipline.

What to look for:

  • Clear feature prioritization
  • Willingness to make cuts
  • Understanding of core vs. nice-to-have
  • Criteria for prioritization decisions

Red flag: Everything is "essential." No prioritization possible.

"What is the minimum viable version that proves value?"

Minimum viable is often lip service. The "minimum" grows until it's comprehensive.

What to look for:

  • Truly minimal scope definition
  • Clear value proposition for minimum version
  • Validation plan before expansion
  • Discipline to resist scope creep

Red flag: "Minimum viable" includes most originally requested features.

"Who decides scope changes, and what's the decision process?"

Scope creep is often death by a thousand approvals.

What to look for:

  • Clear decision authority
  • Impact assessment for changes
  • Operational cost consideration in change decisions
  • Discipline to say no

Red flag: Any stakeholder can add scope. No centralized authority.

Questions About Operations Involvement

"When does IT operations get involved in this initiative?"

Operations often inherits systems without input into their design.

What to look for:

  • Operations involvement during requirements
  • Operational sustainability as design input
  • Staffing and support planning during build
  • Knowledge transfer and transition planning

Red flag: Operations finds out about the system at handover.

"Who is accountable for operational costs?"

Project sponsors are rarely accountable for operational outcomes.

What to look for:

  • Clear operational cost ownership
  • Accountability beyond project completion
  • Incentive alignment with operational efficiency
  • Long-term ownership clarity

Red flag: Project sponsor accountability ends at launch.

Using These Questions

Not every question applies to every initiative. But patterns emerge.

If the answers to several of these questions are unclear or concerning, the initiative isn't ready for approval.

The questions that seem difficult to answer are often the most important. If validation methodology is unclear, validation isn't happening. If operational cost is uncertain, it's probably underestimated. If scope is inflexible, it's probably inflated.

Better to ask uncomfortable questions before approval than to discover the answers in production.

At Topcode, we welcome these questions. When you work with us on process digitalization, we've already thought through validation, operational sustainability, scope discipline, and total cost of ownership. That's what makes the difference between systems that become assets and systems that become liabilities.

The right questions, asked early, prevent problems that no amount of operational excellence can fix later.