How to structure RTM integration criteria and rollout to deliver execution reliability
This playbook translates the reality of RTM execution into concrete, field-tested criteria. We’re focusing on execution reliability, not software hype, so you can defend numbers, reduce disputes, and move quickly from test to rollout. Use the role-based lenses that follow to drive pilots, set clear acceptance tests, and evaluate vendor risk with practical checks that map to distributor behavior, field workflows, and order-to-cash cycles.
Operational Framework & FAQ
Executive integration criteria: governance, risk, and API-first alignment
Executive framing of integration criteria, risk controls, and vendor governance. Aligns API-first architecture with ERP and business processes to deliver rapid value.
At an executive level, how should we frame our technical and integration requirements so that API-first design, ERP connectivity, offline-first mobility, and master data quality are all aligned to a 30–60 day go-live, instead of turning into a long IT project?
C1091 Executive framing of integration criteria — For a mid-sized FMCG manufacturer in India evaluating CPG route-to-market management systems for distributor operations and retail execution, how should the technical and integration criteria be framed at an executive level so that API-first architecture, ERP connectivity, offline-first mobile design, and master data management are all aligned with a 30–60 day time-to-value expectation rather than a long IT program?
An executive-level RTM brief for a mid-sized FMCG in India should frame technical and integration criteria around fast, low-risk value delivery: API-first connectivity to ERP, offline-first mobile for field and distributors, and basic master data alignment, all scoped to support a 30–60 day pilot rather than a multi-year program. The focus should be on a small, well-instrumented rollout that proves data reliability and execution uplift quickly.
At this level, leaders can ask for pre-built ERP connectors for primary and secondary sales, standardized outlet and SKU master structures, and documented APIs for orders, invoices, and claims. The RTM mobile and DMS interfaces should be evaluated on offline resilience, app performance on low-cost Android devices, and minimal-click workflows for order capture and claims. Master data expectations—such as outlet ID structure and SKU hierarchies—should be agreed upfront, with a clear plan for initial cleaning and periodic synchronization.
To keep time-to-value short, criteria can explicitly cap custom development for the pilot: rely on out-of-the-box scheme types, standard dashboards for numeric distribution and fill rate, and simple integration paths to the existing ERP and tax setups. Executives should insist that all technical commitments—API endpoints, data mappings, and offline behavior—are demonstrated in a sandbox or limited geography before broad rollout.
From a CIO perspective, what technical integration criteria should we prioritize to be sure we’re choosing a safe vendor, especially around ERP connectors, API governance, and reliable data reconciliation?
C1092 CIO lens on safe integration choice — When a large consumer goods company in Southeast Asia is selecting a CPG route-to-market platform to digitize distributor management and sales force automation, which technical integration criteria should the CIO prioritize to ensure the vendor is a safe choice rather than a risky startup, particularly with respect to ERP connectors, API governance, and data reconciliation reliability?
For a large consumer goods company in Southeast Asia, the CIO should prioritize integration criteria that prove the RTM vendor is operationally mature: hardened ERP connectors, clear API governance, and reliable data reconciliation between RTM and finance systems. The objective is architectural safety and predictable operations, not cutting-edge experimentation.
Technically, this means validating that the vendor supports robust, documented connectors for common ERPs (e.g., SAP, Oracle), including standardized mappings for customer, material, pricing, tax, and invoice objects. The RTM platform should expose stable, versioned APIs with rate limits, authentication standards, and monitoring hooks that slot into existing middleware or API gateways. Data reconciliation reliability is evidenced through near real-time sync status dashboards, retry and error-handling mechanisms, and reconciliation reports that tie RTM transactions back to ERP postings at document ID level.
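To make the reconciliation criterion concrete, here is a minimal sketch of document-level matching between RTM transactions and ERP postings. The field names (`doc_id`, `amount`) and the tolerance are assumptions for illustration, not any vendor's schema:

```python
# Minimal reconciliation sketch: tie RTM transactions back to ERP postings
# at document-ID level. Field names and tolerance are illustrative
# assumptions, not a specific vendor's schema.

def reconcile(rtm_docs, erp_docs, tolerance=0.01):
    """Return (matched, amount_mismatches, missing_in_erp, missing_in_rtm)."""
    erp_by_id = {d["doc_id"]: d for d in erp_docs}
    matched, mismatched, missing_in_erp = [], [], []
    for doc in rtm_docs:
        erp = erp_by_id.pop(doc["doc_id"], None)
        if erp is None:
            missing_in_erp.append(doc["doc_id"])
        elif abs(doc["amount"] - erp["amount"]) > tolerance:
            mismatched.append(doc["doc_id"])
        else:
            matched.append(doc["doc_id"])
    missing_in_rtm = list(erp_by_id)  # ERP postings with no RTM counterpart
    return matched, mismatched, missing_in_erp, missing_in_rtm

rtm = [{"doc_id": "INV-001", "amount": 100.0},
       {"doc_id": "INV-002", "amount": 250.0},
       {"doc_id": "INV-003", "amount": 75.0}]
erp = [{"doc_id": "INV-001", "amount": 100.0},
       {"doc_id": "INV-002", "amount": 240.0},
       {"doc_id": "INV-004", "amount": 60.0}]

print(reconcile(rtm, erp))
```

A production reconciliation report would run such a check per distributor and per day, feeding the sync-status dashboards mentioned above.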
The CIO should also scrutinize data governance: how master data is synchronized, how conflicts are resolved, and how audit trails are preserved across systems. Vendors lacking production-grade DevOps practices, integration SLAs, or reference customers with similar stacks and regulatory environments are higher risk, regardless of feature breadth.
How should Procurement ask vendors to package ERP connectors, API usage, and data residency options into simple, comparable bundles instead of long, confusing technical SKU lists?
C1096 Procurement simplification of technical scope — When a beverage CPG company in India is shortlisting vendors for route-to-market digitization, what technical and integration criteria should Procurement define so that ERP connectors, API limits, and data residency options are packaged into simple, comparable bundles instead of long lists of technical SKUs?
When shortlisting RTM vendors, Procurement should define technical and integration criteria as simple, comparable bundles rather than granular technical SKUs. For an Indian beverage company, the bundles should clearly cover ERP connectors, API capabilities, and data residency options aligned with compliance and time-to-value needs.
A practical approach is to create 3–4 labeled bundles. For example, an ERP Integration bundle that includes supported ERP versions, standard objects covered (customers, SKUs, pricing, invoices, schemes), deployment effort, and reconciliation features; an API & Extensibility bundle that specifies available REST endpoints, rate limits, authentication standards, and monitoring tools; and a Data Residency & Compliance bundle clarifying hosting locations, backups, access controls, and GST/e-invoicing compatibility. Each bundle is scored on readiness (pre-built vs. custom), implementation timelines, and reference customers.
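The bundle comparison itself can be kept mechanical. The sketch below shows one way to score vendors per bundle with Procurement-set weights; the bundle names, ratings, and weights are assumptions for the example, not a prescribed taxonomy:

```python
# Illustrative bundle scorecard: each vendor is rated 1-5 per bundle and
# weighted by Procurement's priorities. Names and weights are assumptions.

WEIGHTS = {"erp_integration": 0.4, "api_extensibility": 0.3, "residency_compliance": 0.3}

def bundle_score(ratings):
    """Weighted score in [1, 5] from per-bundle ratings."""
    return sum(WEIGHTS[b] * r for b, r in ratings.items())

vendors = {
    "Vendor A": {"erp_integration": 5, "api_extensibility": 4, "residency_compliance": 3},
    "Vendor B": {"erp_integration": 3, "api_extensibility": 5, "residency_compliance": 5},
}

ranked = sorted(vendors, key=lambda v: bundle_score(vendors[v]), reverse=True)
for v in ranked:
    print(v, round(bundle_score(vendors[v]), 2))
```

Keeping the scoring this simple makes it easy to show Finance and Sales stakeholders exactly why one vendor outranks another.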
Procurement can then compare vendors on these bundles side by side, linking commercial terms—such as fixed-price integration or SLA penalties—to bundle performance rather than to individual technical line items. This framing helps non-technical stakeholders, including Finance and Sales, understand integration risk without wading through low-level configuration details.
Given we have multiple ERPs and regional DMS tools, what integration approach should our CIO push for so your platform becomes the single source of truth without creating a brittle, hard-to-change monolith?
C1099 Designing RTM as single source of truth — For a large home care CPG enterprise in Southeast Asia with multiple ERP instances and regional DMS tools, what technical integration strategy should the CIO adopt so that the selected route-to-market platform can serve as a single source of truth across heterogeneous systems without locking the company into a brittle, monolithic architecture?
For a large home care CPG in Southeast Asia with multiple ERPs and regional DMS tools, the CIO should adopt an integration strategy that positions the RTM platform as a governed data and process hub without becoming a brittle monolith. The architecture should emphasize API-first connectivity, decoupled services, and a clear master data strategy.
Practically, this means using standardized APIs or integration middleware to connect each ERP and DMS into the RTM platform, with well-defined contracts for customer, outlet, SKU, pricing, and transactional data. The RTM system can serve as the operational Single Source of Truth for secondary sales and trade schemes while publishing reconciled, normalized datasets back to corporate data warehouses and ERPs. Microservices or modular components—for DMS, SFA, TPM, and analytics—allow individual capabilities to evolve without forcing a wholesale platform replacement.
The CIO should insist on data lineage and governance tools within the RTM stack: clear ownership of master data, version control for integration mappings, and audit trails for transformations. Avoiding tight, point-to-point couplings between RTM and each ERP instance preserves flexibility; instead, a hub-and-spoke or API gateway pattern, with documented SLAs and fallbacks, reduces lock-in and simplifies future system changes.
Contractually, what should Legal and Procurement insist on around data portability, API documentation, and exit support so we’re not locked in if we ever change RTM vendors?
C1106 Guardrails against RTM vendor lock-in — When a household products CPG company in India is negotiating with a route-to-market vendor, what contractual technical criteria should Legal and Procurement insist on regarding data portability, API documentation, and exit support to reduce the risk of vendor lock-in if the company decides to switch RTM platforms in the future?
Legal and Procurement should insist on explicit contractual clauses covering data portability, API documentation, and exit support so the company can switch RTM platforms without losing control of data or integrations. Vendor lock-in risk falls significantly when data structures, interfaces, and migration assistance are contractually defined rather than implied.
Most lock-in issues arise when interfaces are proprietary, documentation is incomplete, or the vendor has no obligation to assist in extracting historical data and configurations in a usable format. This becomes acute in RTM where outlet masters, transaction histories, scheme data, and claim records are needed for audits and ongoing analytics even after a platform change.
Contracts should therefore require: rights to export all master and transactional data in documented, non-proprietary formats; up-to-date, versioned API documentation with change-notice periods; commitments to maintain backward compatibility or provide migration paths for APIs; and clearly scoped exit support services (data extraction, schema mapping, and knowledge transfer) with defined timelines and fees. It is also prudent to include clauses for data retention after termination, secure deletion options, and assurances that configuration artifacts (such as scheme templates or business rules) can be exported in human-readable or standardized forms.
Given our past ERP–DMS failures, what should our CIO ask you about similar integrations you’ve done with CPG companies like us, so we know your approach is tried and tested and not experimental?
C1107 Seeking peer proof of integration success — For a beverage CPG company in Southeast Asia that has previously suffered from failed ERP–DMS integrations, what due-diligence questions should the CIO ask an RTM vendor’s sales team about previous integrations in the same industry and revenue band to gain confidence that the technical approach is the safe standard rather than an experiment?
For a beverage CPG in Southeast Asia with previous ERP–DMS failures, the CIO should probe RTM vendors on concrete, like-for-like integration experience to confirm that the proposed approach is standard practice, not experimental. Confidence increases when the vendor can demonstrate repeatable patterns with similar ERPs, transaction volumes, and distributor structures.
Most integration failures come from untested assumptions about volume, error handling, and master-data alignment. Vendors that have only done low-scale or different-industry integrations often underestimate the complexity of CPG secondary sales and trade promotions, leading to performance or data-quality issues after go-live.
Due-diligence questions should ask: how many CPG integrations they have done with the same ERP version and region; typical daily transaction volumes and peak sync loads handled; specific problems encountered and how they were resolved; whether they use a standard connector or one-off custom builds; how they manage master-data alignment and error reconciliation; and whether reference customers of similar revenue size will confirm performance and stability. The CIO should also ask to see sample architecture diagrams, log screenshots, and monitoring dashboards from existing deployments.
To make RFP comparisons easier, how can Procurement and IT create a few standard technical bundles that group ERP connectors, DMS integration, offline mobile features, and analytics APIs into simple options across vendors?
C1108 Creating standard technical bundles for RFPs — When an Indian snacks CPG company wants to simplify RFP evaluation for route-to-market systems, how can Procurement and IT jointly define a small set of standard technical bundles that group ERP connectors, DMS integration, mobile offline capabilities, and analytics APIs into clear options that are easy to compare across vendors?
Procurement and IT can simplify RTM RFP evaluation by defining a few standard technical “bundles” that group ERP connectors, DMS integration, offline mobile capability, and analytics APIs into coherent options. Each bundle represents a clear integration and capability level that vendors must price and commit to, reducing apples-to-oranges comparisons.
This bundling approach aligns with how RTM programs actually roll out—typically moving from basic ERP connectivity and field order capture toward deeper distributor integration and advanced analytics. Without bundles, vendors often mix different components in custom ways, making cost and risk comparisons opaque.
Typical bundles might include: a foundational bundle with core ERP sync (masters and orders), basic offline mobile SFA, and standard reporting; an extended bundle adding full distributor DMS integration for secondary sales, claims flows, and richer offline workflows; and an advanced analytics bundle with open analytics APIs, data-extract options, and control-tower views. Each bundle should specify minimum API standards, supported data entities, offline behavior, and SLA expectations so vendors respond against well-defined, comparable technical scopes.
For AI-driven recommendations in coverage and orders, what should CSO and CIO jointly specify about data lineage, open APIs, and override logging so AI decisions are explainable and auditable?
C1110 Governance criteria for RTM AI integrations — When a personal care CPG manufacturer in India is evaluating route-to-market systems with embedded AI recommendations for outlet coverage and order suggestions, what technical criteria should the CSO and CIO jointly define around data lineage, API openness, and override logging to ensure that AI decisions remain explainable and auditable?
When evaluating RTM platforms with embedded AI for coverage and order suggestions, the CSO and CIO should define criteria that make AI decisions explainable, traceable, and overrideable through clear data lineage, open APIs, and robust logging. AI must sit on top of auditable sales data, not replace commercial judgment with opaque recommendations.
In practice, this means ensuring every AI suggestion—such as which outlets to visit or what quantities to push—can be tied back to specific input features like past sales, outlet type, scheme eligibility, or stock availability. A frequent failure mode is AI that behaves like a “black box,” undermining trust from sales leaders and Finance because they cannot understand or verify the drivers of recommendations.
Technical criteria should ask vendors to show: how input data (secondary sales, masters, schemes) flows into the AI layer and is versioned; how APIs expose both the recommendation and its key reasoning factors; how manual overrides by sales reps or managers are logged and can be analyzed later; and how model versions, training data periods, and parameter changes are recorded for audit and rollback. The CSO and CIO should also insist on mechanisms to disable or constrain AI in specific territories during pilots while still keeping full transparency of what the AI would have recommended.
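One way to make the override-logging criterion tangible is an auditable recommendation record that carries its inputs, model version, and any manual override together. This is a hedged sketch; the field names are illustrative, not a vendor API:

```python
# Sketch of an auditable AI recommendation record: each suggestion carries
# its input features, model version, and any manual override. Field names
# are illustrative assumptions, not a vendor schema.

from dataclasses import dataclass
from typing import Optional
from datetime import datetime, timezone

@dataclass
class Recommendation:
    outlet_id: str
    suggested_qty: int
    model_version: str
    reasoning_factors: dict           # e.g. past sales, scheme eligibility, stock
    override_qty: Optional[int] = None
    override_by: Optional[str] = None
    override_at: Optional[str] = None

    def apply_override(self, qty, user):
        """Record who changed the suggestion, to what, and when."""
        self.override_qty = qty
        self.override_by = user
        self.override_at = datetime.now(timezone.utc).isoformat()

    def effective_qty(self):
        return self.override_qty if self.override_qty is not None else self.suggested_qty

rec = Recommendation("OUT-123", 24, "v1.4.2",
                     {"avg_weekly_sales": 20, "scheme_eligible": True, "stock_on_hand": 6})
rec.apply_override(12, "rep-007")
print(rec.effective_qty(), rec.override_by)
```

Because the original suggestion and the override coexist in one record, analysts can later compare what the AI recommended against what the field actually did.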
If we’re replacing country-specific RTM systems with one global platform, what common technical rules should the global CIO set on APIs, MDM ownership, and integration SLAs so local teams buy into the new standard?
C1114 Global RTM standardization technical guardrails — When a large FMCG company in Southeast Asia is consolidating multiple country-specific route-to-market systems onto a single platform, what cross-cutting technical criteria should the global CIO enforce around API standards, master data ownership, and integration SLAs to ensure that local teams accept the new global standard?
When consolidating multiple country-specific RTM systems onto one platform, the global CIO should enforce cross-cutting criteria around API standards, master-data ownership, and integration SLAs that balance global consistency with local operability. The new standard must feel stable and predictable for local teams, not like a loss of control.
Consolidation programs fail when each country retains custom interfaces and master-data rules, or when global policies are imposed without reliable performance. Local teams will resist if integrations are brittle, if masters are frequently changed without notice, or if SLAs do not reflect their trading rhythms.
Key criteria include: a unified API specification for ERP, tax, and external data feeds, with clear versioning and deprecation policies; a global MDM model that defines who owns outlet, distributor, and SKU data, including change-approval workflows; and integration SLAs that cover uptime, latency, error-resolution times, and change windows. The CIO should also require a governance forum where global and local stakeholders review API changes, data-model updates, and SLA performance, with transparent reporting that builds trust in the new global RTM platform.
In practical terms, what does it mean for an RTM platform to be API-first, and why should our CIO or RTM lead really care about that when tying it into ERP, tax systems, and external data feeds?
C1121 Explainer on API-first architecture in RTM — In the context of CPG route-to-market execution for emerging markets, what does an API-first architecture mean for a sales and distribution platform, and why should a CIO or Head of RTM care about this when integrating the platform with ERP, tax systems, and external data sources?
In CPG route-to-market execution, an API-first architecture means the sales and distribution platform is designed from the outset to expose and consume standardized, well-documented interfaces for masters, transactions, and reference data. CIOs and Heads of RTM should care because API-first significantly simplifies integration with ERP, tax systems, and external data sources, while reducing long-term change and maintenance risk.
API-first RTM platforms typically handle all major operations—such as outlet onboarding, order creation, claims, price updates, and scheme configuration—through consistent APIs rather than custom, one-off integrations. This makes it easier to plug into SAP or other ERPs, connect tax and e-invoicing portals, or feed data into BI tools and eB2B marketplaces using repeatable patterns.
For RTM leaders, the benefits include faster rollout of new distributors or channels, more reliable sync of secondary sales and scheme data, and easier adaptation when regulations or internal systems change. Without an API-first approach, integrations tend to be brittle, vendor-specific, and costly to modify, which directly impacts reporting reliability, compliance, and the ability to scale route-to-market strategies across markets.
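From the integrator's side, "API-first" simply means every operation is one documented, versioned HTTP call. The sketch below illustrates that shape for order creation; the base URL, endpoint path, payload fields, and bearer-token header are all assumptions for the example, not a real vendor API:

```python
# Hedged sketch of an API-first integration call. The endpoint, payload
# shape, and auth scheme are hypothetical, not a specific vendor's API.

import json
from urllib import request

BASE = "https://rtm.example.com/api/v2"   # hypothetical versioned base URL

def build_order_payload(outlet_id, lines):
    """Assemble the JSON body a hypothetical POST /orders endpoint expects."""
    return {"outlet_id": outlet_id,
            "lines": [{"sku": sku, "qty": qty} for sku, qty in lines]}

def create_order(token, outlet_id, lines):
    """One documented, versioned call per operation -- the API-first contract."""
    req = request.Request(
        f"{BASE}/orders",
        data=json.dumps(build_order_payload(outlet_id, lines)).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:    # production code would add retries/backoff
        return json.load(resp)

print(build_order_payload("OUT-1", [("SKU-9", 3)]))
```

The point is not the specific call but the pattern: the same contract shape repeats for outlets, claims, prices, and schemes, which is what makes integrations repeatable rather than one-off.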
Offline-first field execution and UX reliability
Focus on offline-capable field apps and reliable synchronization in low connectivity. Aims for a simple, fast UX that preserves order capture and store audits across remote outlets.
What are the key technical points my distribution team should use to compare vendors on offline reliability, sync performance in patchy networks, and integration with our existing distributor billing tools?
C1093 Ops view of offline and integration basics — For a food and beverage CPG company in Africa looking to streamline its route-to-market field execution, what high-level technical criteria should the Head of Distribution use to compare vendors on offline-first mobile reliability, sync performance in low-connectivity territories, and ease of integrating with existing distributor billing systems?
For a food and beverage CPG in Africa streamlining field execution, the Head of Distribution should compare vendors primarily on offline-first reliability, sync behavior in low-connectivity regions, and straightforward integration with existing distributor billing. These technical criteria directly influence journey plan continuity, order capture, and inventory visibility.
Effective mobile solutions cache route assignments, outlet lists, SKU catalogues, pricing, and scheme eligibility locally, allowing a full day of sales calls offline without data loss. The app should queue orders, collections, and visit data with time and GPS stamps, then sync incrementally over unstable 2G/3G networks, handling partial uploads and automatic retries. Sync performance should be observable via simple metrics—such as average sync time and failure rates—so operations can intervene if connectivity or device issues arise.
For distributor billing system integration, the RTM platform should support file-based or API-based exchanges compatible with common local DMS packages, including flexible customer and SKU mapping and basic tax handling. Vendors who require the distributor to replace their billing system or run complex middleware in every territory tend to face more resistance and longer deployment cycles in fragmented African markets.
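The queued, retrying sync described above can be sketched in a few lines. The transport function is a stand-in assumption for a real network layer; the point is that offline transactions stay safely queued until an upload succeeds:

```python
# Minimal offline sync-queue sketch: transactions captured offline upload
# with bounded retries; failures remain queued. The upload callable is a
# stand-in assumption for a real transport layer.

import time

class SyncQueue:
    def __init__(self, upload, max_retries=3):
        self.pending = []          # transactions captured offline
        self.upload = upload       # callable(txn) -> bool, injected transport
        self.max_retries = max_retries

    def capture(self, txn):
        txn["queued_at"] = time.time()
        self.pending.append(txn)

    def sync(self):
        """Try each queued transaction; keep the ones that still fail."""
        still_pending = []
        for txn in self.pending:
            ok = False
            for _ in range(self.max_retries):
                if self.upload(txn):
                    ok = True
                    break
            if not ok:
                still_pending.append(txn)
        self.pending = still_pending
        return len(self.pending)   # observable metric: backlog after sync

# Simulate a flaky link that succeeds only on every second attempt.
attempts = {"n": 0}
def flaky_upload(txn):
    attempts["n"] += 1
    return attempts["n"] % 2 == 0

q = SyncQueue(flaky_upload)
q.capture({"type": "order", "outlet": "OUT-9", "qty": 5})
print(q.sync())   # backlog after sync
```

The returned backlog count is exactly the kind of simple sync metric operations teams can watch to spot connectivity or device problems early.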
Given that many of our van routes are offline all day, what offline-first requirements should we set so order capture, inventory checks, and journey plans still work smoothly when there’s zero network?
C1101 Offline-first requirements for van sales — For a beverage CPG firm in Africa that relies heavily on van sales in remote areas, what offline-first technical requirements should the Head of Field Sales define so that order capture, inventory visibility, and journey plans continue to work seamlessly even when the route-to-market mobile app has no connectivity for an entire day?
For a beverage CPG in Africa relying on van sales in remote areas, the Head of Field Sales should define offline-first requirements that allow routes to operate a full day without connectivity. These requirements should cover order capture, inventory visibility, and journey plans as self-contained, locally stored capabilities.
Concretely, the mobile app must download and cache complete route plans, outlet lists with GPS coordinates, SKU catalogs, pricing, discounts, and current stock levels at the start of the day or whenever connectivity is available. Throughout the day, reps should be able to record orders, invoices, cash collections, returns, and stock movements locally, with each transaction time-stamped and GPS-tagged. The app should handle offline stock decrement and basic credit limit checks using the last known balances, preventing overselling where possible.
Sync logic must be resilient to poor networks: queued transactions should upload incrementally, with conflict resolution rules for inventory and customer balances, and clear feedback to the user about sync status. The requirements should explicitly reject designs that disable core workflows when the network is down or that lose data on app crashes, since such failures quickly erode trust among van sales teams.
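The offline guardrails above, stock decrement and last-known credit checks, can be sketched as a single gate on order capture. Data shapes here are assumptions for illustration:

```python
# Sketch of offline order guardrails: decrement cached van stock and check
# a last-known credit balance before queuing an order. Data shapes are
# illustrative assumptions.

def try_capture_order(local_state, outlet_id, sku, qty, value):
    """Accept an order offline only if cached stock and credit allow it."""
    stock = local_state["stock"].get(sku, 0)
    credit_left = (local_state["credit_limit"][outlet_id]
                   - local_state["outstanding"][outlet_id])
    if qty > stock:
        return False, "insufficient van stock (last known)"
    if value > credit_left:
        return False, "over credit limit (last known balance)"
    local_state["stock"][sku] = stock - qty
    local_state["outstanding"][outlet_id] += value
    local_state["queue"].append(
        {"outlet": outlet_id, "sku": sku, "qty": qty, "value": value})
    return True, "queued for sync"

state = {"stock": {"SKU-1": 10}, "credit_limit": {"OUT-5": 1000},
         "outstanding": {"OUT-5": 800}, "queue": []}
print(try_capture_order(state, "OUT-5", "SKU-1", 4, 150))   # accepted
print(try_capture_order(state, "OUT-5", "SKU-1", 4, 300))   # blocked on credit
```

Note that the checks use last-known balances, which is the honest best an offline device can do; final enforcement still happens server-side at sync time.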
If we’re worried about field adoption, what UX and offline criteria should Regional Sales focus on so booking orders and doing store audits takes fewer clicks and syncs faster than what reps use today?
C1113 Field-centric criteria for mobile UX — For a mid-sized CPG snacks company in India concerned about field adoption of a new route-to-market mobile app, what technical UX and offline criteria should the Regional Sales Manager highlight in the evaluation so that order booking and store audits are completed in fewer clicks and with faster sync than the current tools?
For a mid-sized snacks CPG in India, the Regional Sales Manager should highlight UX and offline criteria that directly reduce taps, screen switching, and sync delays for order booking and store audits. Field reps will adopt the new RTM app only if it feels faster and more reliable than the current tools in low-connectivity conditions.
Adoption failures usually come from heavy forms, slow syncs, or apps that lock up when the network drops. In snacks, where beats are dense and call volumes high, every extra click or second per outlet multiplies into lost coverage and resistance from the field.
Evaluation criteria should therefore demand: single-screen or minimal-step order capture, with smart defaults based on recent orders; offline capture of orders, surveys, and photo audits with clear sync status indicators; background or scheduled sync that does not block the rep; performance targets (e.g., app loading time, time to submit an order) measured on realistic low-end devices; and configurable forms so unnecessary fields can be hidden. The RSM should also ask vendors to demonstrate typical beats end-to-end in offline mode during evaluation, with real device tests rather than only emulator demos.
For our field apps, what does an offline-first design actually involve, and how does it affect order capture reliability and rep adoption in low-network areas?
C1122 Explainer on offline-first mobile design — For CPG manufacturers managing route-to-market operations across distributors and retail outlets in India and Africa, what does offline-first mobile design mean in practice for field execution apps, and how does it impact order capture reliability and user adoption in low-connectivity territories?
Offline-first mobile design in CPG field execution means the SFA app behaves as if it is always online, even when there is no network, by caching all required data locally and queuing every transaction for later sync. This directly improves order capture reliability in India and Africa by ensuring sales reps can complete calls, generate order acknowledgements, and follow journey plans without depending on mobile coverage.
In practice, offline-first field apps preload beats, outlet masters, price lists, schemes, and assortments on the device, and store orders, surveys, and photo audits in a local database with clear sync status flags. Reliable apps handle versioning of master data, conflict resolution when the same outlet is updated by multiple users, and idempotent order posting so that orders created offline are not duplicated on the server during burst syncs. Strong UX patterns show reps whether data is saved locally, queued for sync, or fully confirmed, which reduces anxiety and re-entry.
Most organizations see higher user adoption when workflows are optimized for offline conditions rather than treated as an exception. Reps trust the system if it never blocks order capture, is fast on low-end Android devices, and recovers gracefully after days without connectivity. This design reduces failed visits, disputes with distributors about “missing” orders, and back-office rework, and it stabilizes core execution metrics such as strike rate, call compliance, and fill rate in low-connectivity territories.
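The idempotent order posting mentioned above usually rests on a client-generated key. The sketch below shows the pattern with an in-memory stand-in for the server; the structures are illustrative assumptions:

```python
# Sketch of idempotent order posting: orders created offline carry a
# client-generated UUID, so a burst sync that replays the same order is
# deduplicated server-side. Structures are illustrative assumptions.

import uuid

def new_offline_order(outlet_id, lines):
    return {"client_order_id": str(uuid.uuid4()),  # generated once, offline
            "outlet_id": outlet_id, "lines": lines}

class OrderStore:
    """Stand-in for the server: accepts each client_order_id exactly once."""
    def __init__(self):
        self.orders = {}

    def post(self, order):
        oid = order["client_order_id"]
        if oid in self.orders:
            return "duplicate-ignored"       # safe to retry or replay
        self.orders[oid] = order
        return "created"

store = OrderStore()
order = new_offline_order("OUT-42", [{"sku": "SKU-7", "qty": 2}])
print(store.post(order))   # created
print(store.post(order))   # duplicate-ignored (same order replayed in a burst sync)
```

Because replays are harmless, the mobile app can retry aggressively after days offline without creating the duplicate orders that erode distributor trust.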
Master data, DMS integration, and data governance
Addresses master data governance, central control with local flexibility, and DMS-to-RTM data integration.
In our RFP, how should we specify integration and data-governance so outlet and SKU masters are centrally governed but local markets still have some flexibility on attributes?
C1102 Balancing central MDM and local flexibility — When an Indian CPG company in the snacks segment is evaluating route-to-market platforms, what integration and data-governance criteria should the RTM CoE lead include in the RFP to ensure that outlet and SKU master data is centrally governed while still allowing local markets some flexibility in attributes?
For an Indian snacks CPG, the RTM CoE should specify RFP criteria that enforce a single, centrally governed outlet and SKU master while allowing local attribute extensions through controlled, metadata-driven fields. Central governance of IDs and hierarchies protects analytics and scheme integrity, while configurable local attributes preserve flexibility for regional packs, promotions, and channel nuances.
Most RTM programs succeed when the “source of truth” for outlet and SKU identity is unambiguous, versioned, and owned at the corporate level, and when all DMS, SFA, and TPM modules consume this master only through governed APIs. Failure modes usually appear when distributors or regions can create or edit core IDs in their own systems, leading to duplicate outlets, ghost SKUs, and untraceable scheme leakage.
RFP language should therefore ask vendors to demonstrate: a central MDM layer for outlet and SKU codes; role-based control over who can create or change master records; API-only replication of masters into local instances; support for local, non-key attributes (e.g., local nickname, custom price segment, neighborhood tag) that never alter the global ID; and audit trails for all master-data changes. The RTM CoE should also require clear rules for master-data conflict resolution, automated de-duplication, and how local requests for new outlets/SKUs are approved and promoted to the central master before being used in transactions.
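The governed-master pattern above, immutable global identity plus whitelisted local attributes, can be sketched as a simple update gate. Field and attribute names are illustrative assumptions:

```python
# Sketch of central-vs-local master governance: global identity fields are
# immutable to local markets; only whitelisted non-key attributes may be
# edited locally. Names are illustrative assumptions.

GLOBAL_KEYS = {"outlet_id", "global_hierarchy"}          # centrally owned
LOCAL_ATTRS = {"local_nickname", "price_segment", "neighborhood_tag"}

def apply_local_update(master_record, updates):
    """Apply a local market's edits, rejecting any touch on global identity."""
    for key, value in updates.items():
        if key in GLOBAL_KEYS:
            raise PermissionError(f"'{key}' is centrally governed")
        if key not in LOCAL_ATTRS:
            raise ValueError(f"'{key}' is not a whitelisted local attribute")
        master_record.setdefault("local", {})[key] = value
    return master_record

outlet = {"outlet_id": "OUT-IN-000123", "global_hierarchy": "GT/Kirana"}
apply_local_update(outlet, {"local_nickname": "Sharma Stores"})
print(outlet["local"]["local_nickname"])
```

The same gate, enforced in the platform's APIs rather than in application code, is what keeps regional flexibility from ever mutating a global ID.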
To standardize reporting across distributors with different DMS systems, what MDM and API requirements should our RTM CoE define so outlet, distributor, and SKU IDs stay consistent?
C1112 Ensuring consistent master data across distributors — When a home care CPG company in Africa wants to standardize its route-to-market reporting across distributors, what master data and API integration criteria should the RTM CoE specify so that outlet, distributor, and SKU identities are consistent regardless of the local DMS used by each distributor?
To standardize RTM reporting across African distributors using different DMS solutions, the RTM CoE must specify master-data and API criteria that enforce consistent outlet, distributor, and SKU identities irrespective of local systems. Central identity control allows heterogeneous DMS environments to feed one comparable reporting layer.
Inconsistent IDs are the most common reason why consolidated dashboards fail or why finance and sales teams argue over numbers. When each distributor can create its own codes, duplication and mapping errors become routine, undermining confidence in coverage, fill-rate, and scheme ROI metrics.
The CoE should therefore require: a central MDM service for outlet, distributor, and SKU masters; API-based distribution of this master into each distributor’s DMS; mandatory use of central IDs in all data exchanges; and a standard API contract for secondary sales, stock, and claims uploads that references these IDs. Criteria should also cover automated validation rules (rejecting or flagging records with unknown or mismatched IDs), a governance process for onboarding new outlets or SKUs, and tools for periodic de-duplication and hierarchy alignment across distributors.
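The validation rules above can be sketched as a split of each upload into clean rows and rejects with reasons. The row shape is an illustrative assumption:

```python
# Sketch of upload validation against central masters: secondary-sales rows
# referencing unknown outlet or SKU IDs are flagged, not silently loaded.
# Row shape is an illustrative assumption.

def validate_upload(rows, known_outlets, known_skus):
    """Split an upload into clean rows and rejects with reasons."""
    clean, rejects = [], []
    for row in rows:
        problems = []
        if row["outlet_id"] not in known_outlets:
            problems.append("unknown outlet_id")
        if row["sku_id"] not in known_skus:
            problems.append("unknown sku_id")
        if problems:
            rejects.append({**row, "problems": problems})
        else:
            clean.append(row)
    return clean, rejects

rows = [{"outlet_id": "OUT-1", "sku_id": "SKU-A", "qty": 3},
        {"outlet_id": "OUT-9", "sku_id": "SKU-A", "qty": 1}]
clean, rejects = validate_upload(rows, {"OUT-1"}, {"SKU-A"})
print(len(clean), len(rejects))
```

Surfacing rejects with explicit reasons, rather than dropping them, is what lets the CoE's governance process route bad records back to the right distributor for correction.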
Since many of our distributors run old offline DMS software, what should our Distribution head look at technically and operationally to connect them to a modern RTM control tower without creating data delays or quality problems?
C1115 Integrating legacy DMS into modern RTM — For a household products CPG company in India that has historically run offline DMS systems at distributors, what technical integration and change-management criteria should the Head of Distribution evaluate to ensure that connecting those legacy DMS instances to a modern route-to-market control tower does not introduce data latency or quality issues?
For an Indian household products company with legacy offline DMS at distributors, the Head of Distribution should evaluate both integration and change-management criteria to connect those systems to a modern RTM control tower without creating latency or data-quality issues. The integration must respect distributor realities while delivering timely, reliable secondary-sales visibility.
Legacy DMS environments often rely on batch exports, manual file handling, and inconsistent masters, which can introduce delays of days and frequent mismatches. Pushing real-time expectations onto such systems without adaptation typically leads to broken feeds and finger-pointing between Sales, IT, and distributors.
Criteria should therefore cover: supported methods for extracting data from offline DMS (files, lightweight agents, or APIs where available); minimum refresh frequencies clearly agreed (e.g., daily end-of-day vs intra-day for key distributors); validation rules and mapping logic to align distributor codes with central masters; and monitoring tools that highlight missing or anomalous uploads. On change management, the Head of Distribution should require a rollout plan for distributor onboarding, training on upload processes, simple diagnostic tools for distributors, and escalation paths when data fails. Pilot acceptance should explicitly test latency against agreed SLAs and the robustness of reconciliation between DMS and control-tower views.
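Monitoring for missing or late uploads against the agreed refresh frequency can be sketched as follows. This is an illustrative example only, assuming a simple last-upload timestamp per distributor; real monitoring would sit in the control tower's own tooling.

```python
# Illustrative sketch (assumed data shape): flag distributors whose latest
# DMS upload breaches the agreed refresh SLA, so Ops can chase them early.
from datetime import datetime, timedelta

def breach_report(last_upload_by_distributor, sla_hours, now):
    """Return distributors whose latest upload is older than the SLA window."""
    cutoff = now - timedelta(hours=sla_hours)
    return sorted(
        dist for dist, ts in last_upload_by_distributor.items()
        if ts < cutoff
    )

now = datetime(2024, 6, 1, 9, 0)
uploads = {
    "DIST-A": datetime(2024, 5, 31, 23, 30),  # within a 24-hour SLA
    "DIST-B": datetime(2024, 5, 29, 18, 0),   # stale upload
}
print(breach_report(uploads, sla_hours=24, now=now))  # ['DIST-B']
```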
When we talk about MDM for outlets and SKUs in RTM, what exactly are we referring to, and why is it seen as a must-have before we can trust distributor dashboards and promotion ROI reports?
C1123 Explainer on MDM in RTM systems — In the context of CPG route-to-market management systems for emerging markets, what is master data management for outlets and SKUs, and why is it considered a prerequisite for reliable distributor performance dashboards, promotion ROI analytics, and control-tower reporting?
Master Data Management for outlets and SKUs in CPG route-to-market systems is the discipline of maintaining a single, consistent identity, hierarchy, and attribute set for every outlet and product across DMS, SFA, TPM, and ERP. It is considered a prerequisite because any duplication, mismatch, or outdated mapping in outlet or SKU masters immediately corrupts distributor performance dashboards, promotion ROI analytics, and control-tower views.
When outlet IDs differ between SFA, DMS, and ERP, numeric distribution, strike rate, and micro-market penetration cannot be measured reliably, and territory performance appears fragmented. Similarly, inconsistent SKU codes or pack definitions lead to errors in volume, mix, and price realization analysis, breaking promotion uplift calculations and scheme ROI attribution. Clean masters also underpin predictive models such as SKU velocity, perfect store scores, and cost-to-serve per outlet.
Most mature RTM programs therefore implement strict MDM processes: governed code structures, approval workflows for new outlets and SKUs, de-duplication routines, and periodic reconciliation between RTM and ERP masters. This investment reduces downstream effort in claim validation, anomaly investigation, and audit queries and enables stable, trusted control-tower reporting that can be used confidently by Sales, Finance, and Supply Chain.
ERP integration quality, data reconciliation, and finance control
Emphasizes data reconciliation between RTM and ERP, reliable API behavior under high volumes, and automated settlements and promotions.
As a CFO, how should I judge whether data from sales, promotions, and distributor claims in the RTM system will stay fully auditable and reconciled when it flows into our ERP and tax/e-invoicing stack?
C1094 CFO evaluation of data reconciliation — In the context of an Indian personal care CPG manufacturer modernizing its route-to-market analytics and control tower, how should the CFO evaluate technical integration criteria to ensure that sales, trade promotion, and distributor claims data flowing from the RTM system into ERP remain auditable, reconciled, and aligned with tax and e-invoicing compliance requirements?
When modernizing RTM analytics and a control tower in India, the CFO should evaluate integration criteria that guarantee auditability and compliance: consistent mapping of sales, promotions, and claims into ERP, reconciled totals, and alignment with GST and e-invoicing rules. The RTM platform must behave like an extension of Finance, not a parallel ledger.
Key technical expectations include clear data models for secondary sales, trade schemes, and claims that align with ERP document types; deterministic mapping rules for tax codes, GST registrations, and place-of-supply; and immutable audit trails for any transformation applied between RTM and ERP. The integration should support e-invoicing requirements where applicable, ensuring that invoice data generated or captured in RTM is compatible with government portals and ERP tax engines.
The CFO should also ask for reconciliation reports that routinely compare RTM-derived revenue, discounts, and scheme costs with ERP postings by period, customer, and SKU, flagging differences. Vendors that cannot demonstrate end-to-end traceability—from field order or distributor invoice through to ERP financial postings and tax reports—expose the organization to both audit challenges and internal disputes between Sales and Finance.
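The reconciliation reports described above can be sketched in miniature. This is a hedged illustration under assumed keys (period, customer); a production reconciliation would compare by SKU and document type as well.

```python
# Hedged sketch: compare RTM-derived totals with ERP postings by period and
# customer, flagging differences beyond a tolerance. Keys and fields are
# illustrative assumptions, not a specific ERP schema.

def reconcile(rtm_totals, erp_totals, tolerance=0.01):
    """Return mismatches keyed by (period, customer)."""
    mismatches = {}
    for key in set(rtm_totals) | set(erp_totals):
        rtm_val = rtm_totals.get(key, 0.0)
        erp_val = erp_totals.get(key, 0.0)
        if abs(rtm_val - erp_val) > tolerance:
            mismatches[key] = {
                "rtm": rtm_val,
                "erp": erp_val,
                "diff": round(rtm_val - erp_val, 2),
            }
    return mismatches

rtm = {("2024-05", "CUST-1"): 100000.0, ("2024-05", "CUST-2"): 55000.0}
erp = {("2024-05", "CUST-1"): 100000.0, ("2024-05", "CUST-2"): 54400.0}
diffs = reconcile(rtm, erp)
print(diffs)
```

The output surfaces only the exceptions, which is the behavior the CFO should demand: a short, investigable list rather than a full re-audit each period.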
If we integrate the platform with SAP for distributor billing and secondary sales, which API and data-mapping checks should our IT team run in the pilot to avoid mismatches between RTM data and SAP financial entries?
C1097 API mapping checks for SAP integration — For a dairy CPG manufacturer in India integrating a new route-to-market platform with SAP for distributor billing and secondary sales, what specific API and data-mapping criteria should the IT integration manager validate during pilots to ensure that there are no mismatches between RTM transactions and SAP financial postings?
For a dairy CPG in India integrating a new RTM platform with SAP, the IT integration manager should validate specific API and data-mapping criteria during pilots to avoid mismatches between RTM transactions and SAP postings. The objective is one-to-one traceability from field or distributor activity to financial documents.
Critical checks include alignment of customer and material IDs, tax codes, and pricing conditions between RTM and SAP master data; clear mapping of RTM transactions (orders, invoices, returns, schemes, and claims) to SAP document types (such as sales orders, billing documents, and credit notes); and consistent handling of GST fields, including place of supply and tax breakdowns. The APIs or integration layer must support idempotent operations to prevent duplicate postings and provide correlation IDs so that errors can be traced back to specific RTM records.
During pilot, the integration manager should run parallel cycles comparing RTM-derived reports to SAP financial balances by customer, SKU, and period, with special attention to discounts, scheme accruals, and claim settlements. Systems that cannot generate reconciliation logs or that rely on manual adjustments in SAP undermine both audit confidence and the perceived reliability of the RTM platform.
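The idempotency and correlation-ID requirements above can be shown with a minimal sketch. The ERP target here is simulated with a dictionary; the ID scheme and function names are assumptions for illustration, not SAP's actual interface.

```python
# Minimal sketch of idempotent posting: a correlation ID derived from the RTM
# document guarantees that retries never create duplicate ERP entries.
import hashlib

posted = {}  # correlation_id -> ERP document number (simulated ERP ledger)

def correlation_id(rtm_doc):
    """Derive a stable ID from the RTM document's type and identifier."""
    raw = f'{rtm_doc["type"]}|{rtm_doc["rtm_id"]}'
    return hashlib.sha256(raw.encode()).hexdigest()[:16]

def post_to_erp(rtm_doc):
    """Post once; on retry, return the existing ERP document instead."""
    cid = correlation_id(rtm_doc)
    if cid in posted:
        return posted[cid], False          # duplicate suppressed
    doc_no = f"ERP-{len(posted) + 1:06d}"  # simulated posting
    posted[cid] = doc_no
    return doc_no, True

doc = {"type": "invoice", "rtm_id": "INV-2024-0001", "amount": 1250.0}
first, created1 = post_to_erp(doc)
retry, created2 = post_to_erp(doc)  # e.g. replay after a network timeout
print(first == retry, created1, created2)  # True True False
```

The same correlation ID also gives the pilot team the traceability the source text calls for: any ERP document can be walked back to the exact RTM record that produced it.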
If we move from spreadsheets to a full RTM system, how can my RTM operations team tell whether your ERP and tax integrations are mature enough that they won’t break and disrupt month-end closing?
C1098 Ops risk view of ERP integration maturity — When a snack foods CPG company in Africa is evaluating route-to-market systems to replace spreadsheet-based distributor management, what technical criteria should the Head of RTM Operations use to judge whether the vendor’s ERP and tax portal integrations are mature enough to avoid integration failures that could disrupt monthly closing?
For a snack foods CPG in Africa replacing spreadsheet-based distributor management, the Head of RTM Operations should judge ERP and tax portal integrations primarily on maturity and reliability, not just feature promises. The key is to ensure monthly closing and statutory reporting continue without disruption.
Technical maturity is indicated by pre-built connectors to common ERPs, documented and supported data models for customers, SKUs, invoices, and taxes, and proven deployments in similar markets. The integration should handle scheduled batch uploads or near real-time sync with automatic retries, error queues, and clear status dashboards so Operations and Finance can see whether all transactions have flowed through before closing books. For tax portals, even in less-regulated markets, the RTM system should produce tax-compliant invoice data and summaries that align with local requirements and can be easily imported into government or third-party compliance tools.
The Head of RTM Operations should also require test scenarios in pilot—such as high transaction volumes, backdated adjustments, and credit notes—to observe how the integration behaves under stress. Vendors that rely on manual file exchanges without robust validation, or that cannot demonstrate end-to-end reconciliations, present substantial risks to closing timelines.
Because we run SAP S/4HANA, what should our CIO ask you about API throttling, error handling, and rollback so high-volume secondary sales syncs don’t slow down or destabilize SAP?
C1103 Protecting ERP performance in high-volume syncs — For a multinational CPG company using SAP S/4HANA in Southeast Asia, what specific technical questions should the CIO ask a route-to-market vendor’s sales and solution team about API throttling, error handling, and rollback mechanisms to ensure that high-volume secondary sales syncs do not impact ERP performance?
For a multinational using SAP S/4HANA in Southeast Asia, the CIO should ask RTM vendors detailed questions on API throttling, error handling, and rollback to confirm that high-volume secondary sales syncs are controlled, predictable, and non-disruptive to ERP. The goal is to ensure that nightly or intra-day bulk syncs never compete with core SAP processes such as MRP, financial closes, or tax runs.
Stronger due diligence focuses on how the RTM platform limits concurrent calls, prioritizes traffic, and recovers cleanly from failures. A common failure mode is uncontrolled parallel posting (e.g., thousands of sales documents in a short window) that saturates SAP application servers, triggers lock contentions, and leaves partial postings that then require manual clearing.
Technical questions typically include: what throttling controls exist by integration type, company code, or time window; how the connector queues and batches transactions; what retry strategy is used on HTTP, authentication, or functional errors; how the RTM system guarantees idempotency so duplicate postings are avoided; what rollback or compensation logic is applied when only part of a batch succeeds; how error logs are exposed to IT for monitoring; and what performance benchmarks the vendor has achieved with similar S/4HANA volumes in comparable CPG environments.
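The batching and retry behavior the CIO should probe for can be sketched as below. This is a simplified client-side illustration with assumed batch sizes and a simulated transient failure; real connectors would also enforce concurrency limits and time windows on the SAP side.

```python
# Hedged sketch: chunk documents into bounded batches and retry transient
# failures with exponential backoff, so bulk syncs stay controlled and
# predictable. Batch size, delays, and the send function are assumptions.
import time

def chunk(items, size):
    """Yield fixed-size batches so the ERP never sees unbounded bursts."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def post_with_retry(batch, send, max_attempts=3, base_delay=0.01):
    """Retry a batch on transient failure with exponential backoff."""
    for attempt in range(max_attempts):
        try:
            return send(batch)
        except ConnectionError:
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("batch failed after retries")

calls = {"n": 0}
def flaky_send(batch):  # simulated endpoint: fails once, then succeeds
    calls["n"] += 1
    if calls["n"] == 1:
        raise ConnectionError("transient")
    return f"posted {len(batch)} docs"

docs = list(range(250))
results = [post_with_retry(b, flaky_send) for b in chunk(docs, 100)]
print(results)  # ['posted 100 docs', 'posted 100 docs', 'posted 50 docs']
```

Combined with the idempotency guarantees discussed earlier, retries like these become safe: a replayed batch cannot double-post.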
How can our Trade Marketing head make sure promotion setup and scan-based claim modules are tightly integrated with both DMS and ERP, so promotion ROI reports use one reconciled data set?
C1104 Integration for trusted promotion ROI — When a mid-tier CPG manufacturer in India is drafting technical selection criteria for its new route-to-market platform, how should the Head of Trade Marketing ensure that promotion setup and scan-based claims modules are tightly integrated with both the DMS and ERP so that promotion ROI analysis is based on a single reconciled data set?
The Head of Trade Marketing should embed criteria that make promotion setup and scan-based claims sit on a single, reconciled transaction backbone shared by DMS and ERP. Promotion ROI is only credible when scheme definitions, eligible SKUs/outlets, scanned proofs, and financial postings all reference the same master data and transaction IDs.
Operationally, this means promotions cannot live as a standalone configuration in the RTM tool; they must be tightly coupled to the SKU and outlet masters, order lines, and invoice records that flow into ERP. A frequent failure mode is when scheme eligibility logic in the RTM system does not match the logic that Finance uses in ERP, leading to disputes, leakage, and rejected ROI analysis.
Selection criteria should therefore require that: promotion master data (scheme parameters, validity, eligibility) is versioned and linked to central SKU/outlet masters; DMS- and field-captured sales that drive scheme accruals are synchronized to ERP with scheme identifiers intact; scan-based claim evidence is anchored to specific invoices or secondary sales records; and the RTM analytics layer reads from a reconciled dataset where sales, claims, and settlements are joined on common keys. The RFP should also ask vendors how Finance can validate scheme calculations independently using the same underlying transaction set.
From a Finance Controller’s perspective, how do we test whether claim approvals and promotion settlements in the platform can auto-post into ERP with almost no manual reconciliation?
C1111 Reducing finance manual work via integration — For a beverage CPG company in Southeast Asia trying to reduce manual work in its route-to-market finance operations, what technical integration criteria should the Finance Controller use to test whether claim approvals and trade promotion settlements in the RTM system can auto-post to ERP with minimal manual reconciliation steps?
The Finance Controller should use integration criteria that test whether trade-promotion approvals and claim settlements in the RTM system can generate ERP postings automatically with minimal manual steps, while maintaining reconciliation integrity. The objective is to move from spreadsheet-based reconciliations to controlled, system-to-system flows that Finance can audit.
Most manual workload persists when RTM and ERP treat promotions as separate lifecycles, forcing teams to re-key settlements or adjust GL entries later. This leads to errors, timing differences, and disputes with Sales or distributors over what was actually paid or accrued.
Evaluation should therefore require that: approved claims in RTM map unambiguously to ERP documents (credit notes, journal entries, or accruals) with shared IDs; posting logic is driven by a configurable mapping of scheme types to GL accounts and tax treatments; error messages and posting failures are surfaced in Finance-friendly dashboards; and reconciliation reports can show one-to-one ties between RTM claims and ERP entries. Test scenarios in the RFP should include high-volume claim batches, partial rejections, and retroactive scheme changes to see how the integration handles complex real-world finance flows.
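The configurable scheme-to-GL mapping described above can be sketched simply. The mapping table, account codes, and field names here are illustrative assumptions, not a specific ERP chart of accounts.

```python
# Illustrative sketch: a configurable mapping from scheme type to GL account
# and tax treatment drives automatic ERP posting of approved claims.

GL_MAP = {
    "volume_discount": {"gl_account": "400100", "tax_code": "GST-OUT"},
    "display_scheme":  {"gl_account": "400200", "tax_code": "GST-OUT"},
}

def build_posting(claim):
    """Turn an approved RTM claim into an ERP posting payload, or raise
    if the scheme type has no configured mapping (a Finance-visible error)."""
    cfg = GL_MAP.get(claim["scheme_type"])
    if cfg is None:
        raise ValueError(f"unmapped scheme type: {claim['scheme_type']}")
    return {
        "claim_id": claim["claim_id"],  # shared ID ties RTM claim to ERP entry
        "gl_account": cfg["gl_account"],
        "tax_code": cfg["tax_code"],
        "amount": claim["approved_amount"],
    }

posting = build_posting({
    "claim_id": "CLM-0042",
    "scheme_type": "volume_discount",
    "approved_amount": 8200.0,
})
print(posting["gl_account"], posting["claim_id"])  # 400100 CLM-0042
```

Keeping the `claim_id` on the posting payload is what makes the one-to-one reconciliation between RTM claims and ERP entries mechanical rather than manual.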
If we use embedded distributor financing in the platform, what extra technical and integration checks should CFO and CIO put in place around data segregation, API security, and ERP reconciliation to manage risk?
C1116 Risk controls for embedded finance integration — When a beverage CPG company in Africa is selecting a route-to-market platform that offers embedded distributor financing modules, what additional technical and integration criteria should the CFO and CIO define around data segregation, API security, and reconciliation with ERP to manage financial risk?
When selecting an RTM platform with embedded distributor financing, the CFO and CIO should define additional criteria around data segregation, API security, and ERP reconciliation to manage financial risk. Financing flows must be tightly controlled, clearly separated from operational data, and fully aligned with the company’s financial systems and policies.
Risk emerges when credit limits, repayment events, or interest calculations inside the RTM platform diverge from what ERP and Finance recognize. If the same system holds both trade transactions and lending logic without clear boundaries, controls over exposure and compliance can weaken.
Technical requirements should include: strict segregation between financing data and core RTM data at schema and access levels; secure APIs to any external lenders with strong authentication, authorization, and encryption; and traceable mappings from financing events (disbursements, repayments, fees) to ERP postings and GL accounts. The CFO and CIO should also ask about audit trails for changes to credit limits, role-based controls for who can approve financing, and reconciliation reports that tie financing balances in RTM to ERP subledgers at any point in time.
If we need RTM data in our existing BI stack, what should our analytics lead specify about API access, data latency, and history retention so your dashboards and ours stay aligned?
C1117 Aligning RTM data with enterprise BI — For a personal care CPG company in Southeast Asia that wants its route-to-market platform to feed data into existing BI tools, what technical criteria should the data and analytics lead define regarding API access, data latency, and historical data retention so that RTM dashboards and enterprise analytics stay in sync?
For a personal care CPG in Southeast Asia feeding RTM data into existing BI tools, the data and analytics lead should define criteria on API access, latency, and historical retention that keep RTM dashboards and enterprise analytics aligned. RTM must act as a reliable data source, not a competing reporting silo.
Discrepancies arise when RTM exports and enterprise data warehouses use different refresh cycles, schemas, or filters, causing Sales and Finance to see conflicting numbers. Over time, this undermines trust in both the RTM platform and the central BI program.
Key criteria include: well-documented, stable APIs or data-extract mechanisms for all relevant entities (masters, orders, claims, inventories); configurable data-latency options (e.g., near-real time vs scheduled batch), with clear SLAs; and policies for how long detailed transactional and historical data are retained and accessible. The lead should also require metadata documentation, schema-change notification processes, and test environments where BI teams can validate integration patterns before production use.
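One way to keep RTM and BI refreshes aligned is a watermark-based incremental extract, sketched below. Entity names and fields are assumptions for illustration; the point is the contract: each pull returns only records changed since the last watermark, plus the new watermark.

```python
# Hedged sketch of an incremental extract contract for BI: the RTM side
# returns records updated after a watermark, so warehouse refreshes stay
# in step with the platform without full reloads. Fields are assumed.

def extract_since(records, watermark):
    """Return records updated after the watermark, plus the new watermark."""
    changed = [r for r in records if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in changed), default=watermark)
    return changed, new_watermark

orders = [
    {"order_id": "O-1", "updated_at": "2024-06-01T10:00:00"},
    {"order_id": "O-2", "updated_at": "2024-06-02T08:30:00"},
]
batch, wm = extract_since(orders, "2024-06-01T12:00:00")
print([r["order_id"] for r in batch], wm)  # ['O-2'] 2024-06-02T08:30:00
```

ISO-8601 timestamps compare correctly as strings, which keeps the contract simple; the BI team stores the returned watermark and passes it back on the next scheduled pull.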
From a Finance and IT standpoint, what does RTM–ERP data reconciliation actually entail, and how does getting it right cut audit risk and manual effort on promotions and distributor claims?
C1124 Explainer on RTM–ERP data reconciliation — For finance and IT leaders in CPG companies digitizing route-to-market processes, what does data reconciliation between a route-to-market platform and ERP involve, and how does it reduce audit risk and manual workload in areas like trade promotion settlements and distributor claims?
Data reconciliation between a route-to-market platform and ERP in CPG involves systematically matching and validating every commercial event—orders, invoices, returns, claims, and settlements—across the two systems, using common keys and agreed business rules. Done well, this reconciliation reduces audit risk by proving that RTM operational records align with the ERP’s books of record, and it reduces manual workload by automating checks that are otherwise done in spreadsheets.
In practice, reconciliation covers secondary sales and trade promotion flows end-to-end: RTM captures orders, generates or ingests distributor invoices, and validates schemes and claims; the ERP recognizes revenue, provisions for schemes, and posts debit/credit notes. Finance and IT define mapping tables for SKUs, tax codes, and GL accounts, and use APIs or scheduled jobs to compare totals, identify mismatches in quantities, values, tax, or claim status, and generate exception lists instead of full manual review. Control reports often summarize differences by distributor, period, and document type.
This discipline reduces audit findings related to unverifiable promotions, double-settled claims, or unexplained variances between RTM and financial ledgers. It also lowers effort in monthly and quarterly closes, because finance teams resolve a smaller set of structured exceptions rather than reworking entire data sets.
Pilot-to-rollout playbook: fast go-live with cross-functional alignment
Outlines pilot acceptance, standard RFP bundles, and rollout governance to minimize disruption and demonstrate credible business value.
From a sales leadership angle, what technical non-negotiables should we demand around mobile app ease-of-use, number of clicks to book an order, and seamless flow of secondary sales into our current dashboards so reps don’t push back after go-live?
C1095 Sales leadership non-negotiables for UX — For a cosmetics CPG company in Indonesia deploying a new route-to-market platform for field sales and distributor management, what non-negotiable technical criteria should the sales leadership insist on around mobile app usability, click-efficiency of order capture, and smooth integration of secondary sales into existing performance dashboards to avoid user resistance post-go-live?
For a cosmetics CPG in Indonesia, sales leadership should treat mobile usability, click-efficient order capture, and seamless dashboard integration as non-negotiable to avoid user resistance. Field reps will only adopt the RTM platform if it makes daily work faster than existing manual or messaging-based processes.
On usability, criteria include responsive performance on mid-range Android devices, intuitive navigation in local languages, and minimal training time. Order capture flows should be benchmarked by number of taps per standard order: from outlet selection to order submission, the process should reuse previous orders, favorites, or auto-suggested assortments, with offline operation and clear handling of out-of-stock SKUs and schemes. Any additional steps—photo audits, survey questions, or scheme enrolments—should be optional or context-triggered, not mandatory on every call.
Secondary sales and activity data from the app should feed automatically into existing performance dashboards and incentive reports without extra spreadsheet work. This means pre-defined connectors or exports into the company’s BI tools, consistent outlet and SKU master mapping, and near real-time refresh cycles. If the new platform forces sales managers back into Excel to reconcile numbers, field resistance typically increases sharply after initial go-live.
If we need to go live before the next seasonal spike, what should our CSO ask you about pre-built ERP connectors, standard data models, and rollout accelerators to judge if a 30-day go-live is genuinely realistic?
C1100 Testing 30-day RTM go-live claims — When a confectionery CPG company in India wants to deploy a route-to-market platform quickly ahead of a seasonal sales peak, what questions should the CSO ask the vendor’s sales team about pre-built ERP connectors, standard data models, and rollout accelerators to realistically assess whether a 30-day go-live is achievable?
When aiming for a 30-day go-live before a seasonal peak, the CSO should ask focused questions about pre-built ERP connectors, standard data models, and rollout accelerators to test whether the RTM vendor can truly move at that speed. The goal is to expose dependency on custom work early.
Key questions include: which ERP versions and modules the vendor already supports in production; what specific objects (customers, SKUs, pricing, invoices, schemes) are covered by standard mappings; and what typical integration timelines are for similar clients. On data models, the CSO should request documentation of the standard outlet, territory, and SKU structures and ask what minimal master data is required to start, along with how quickly the vendor can ingest existing outlet universes and price lists.
For rollout accelerators, the CSO should probe for pre-configured scheme templates, default dashboards for numeric distribution and fill rate, and pilot playbooks that limit customization. Asking for a concrete 30-day plan—week-by-week milestones, resource expectations from the client side, and clear cut-offs for changes—helps distinguish vendors with repeatable approaches from those relying on optimistic estimates.
In our RFQ, what cross-functional technical criteria should Strategy insist on so API design, ERP integration, offline UX, and MDM are assessed together rather than as separate checkboxes?
C1105 Cross-functional alignment on tech criteria — For a personal care CPG company in Africa preparing an RFQ for a route-to-market solution, what cross-functional technical criteria should the strategy office mandate so that API-first architecture, ERP integration, offline field UX, and master data governance are evaluated together instead of as isolated checklist items?
For a personal care CPG in Africa, the strategy office should define cross-functional criteria that treat architecture, integration, offline UX, and master data governance as a single operating model rather than isolated checkboxes. The RTM stack must be API-first, ERP-aware, field-resilient, and data-governed in one coherent design.
In practice, this means ensuring that the same design choices that make API integration clean also support offline-first mobile operation and consistent outlet/SKU identity across markets. A common failure mode is selecting a solution that scores well on ERP connectors but performs poorly offline, or one that has a slick mobile app but no robust master-data controls, forcing workarounds during rollout.
Mandated criteria should include: clear API standards and documentation for ERP and tax connectivity; offline-capable mobile apps with deterministic sync rules and conflict resolution; a central master data model for outlets, distributors, and SKUs with controlled local extensions; and governance processes that link integration SLAs, data-quality targets, and field adoption metrics. The RFQ should explicitly ask vendors to show how these dimensions are designed together—through reference architectures, sample data flows, and example governance dashboards—rather than as independent modules.
If we plan a multi-country rollout, what technical criteria should our transformation lead look at to be sure offline apps, local tax integrations, and master data structures can be replicated reliably in a hub-and-spoke model?
C1109 Ensuring replicable multi-country RTM rollout — For a CPG company in Africa that wants to roll out a route-to-market system across multiple countries, what technical and integration criteria should the digital transformation lead apply to ensure that offline-first mobile apps, local tax connectors, and master data structures can be replicated reliably using a hub-and-spoke rollout model?
For a multi-country RTM rollout in Africa, the digital transformation lead should prioritize technical and integration criteria that support a hub-and-spoke model: centrally managed offline-first apps, reusable tax and ERP connectors, and a harmonized master-data structure that can be cloned and localized with control. The intent is to make each new country a repeatable deployment, not a fresh integration project.
Hub-and-spoke RTM models work when core patterns—such as outlet and SKU identities, integration methods, and sync logic—are consistent across markets, while country-specific tax rules and languages are handled as configurable variations. A common failure mode is allowing each country team to customize integrations and data models independently, which later blocks cross-country reporting and complicates support.
Criteria should therefore ask vendors to demonstrate: a single, central configuration for mobile apps with country-level parameter sets; offline sync engines that can operate over low-connectivity networks with deterministic conflict resolution; modular tax and ERP connectors that can be reused with different instances or country codes; and a global master-data schema for outlets, distributors, and SKUs with controlled local attributes. The RFQ should also request evidence of how new country rollouts are templatized, including configuration packs, data-migration playbooks, and standard test suites.
In our pilot, what test cases should the project manager include to validate API reliability, RTM–ERP reconciliation, and end-to-end order-to-cash latency?
C1118 Designing technical acceptance tests for pilots — When a snack foods CPG company in India is designing acceptance tests for a pilot route-to-market implementation, what technical and integration test cases should the project manager include to validate API reliability, data reconciliation between RTM and ERP, and end-to-end latency for order-to-cash flows?
When designing acceptance tests for a pilot RTM implementation, the project manager should include technical and integration test cases that validate API reliability, RTM–ERP data reconciliation, and end-to-end order-to-cash latency. The pilot should prove that the platform behaves predictably under realistic volumes and failure conditions, not just in ideal demos.
API reliability tests should simulate intermittent connectivity, retries, and error responses to ensure that the RTM platform handles network and application issues without data corruption or duplicate postings. Reconciliation tests should validate that orders, invoices, and payments match between RTM and ERP for chosen sample periods.
Concrete test cases might cover: bulk order uploads from the field; posting of secondary sales and claims to ERP; handling of invalid or missing master-data IDs; recovery from partial batch failures; and measurement of total time from order capture in the mobile app to order visibility and financial impact in ERP. Acceptance criteria should define thresholds for API success rates, maximum allowed latency, and reconciliation tolerances before the pilot is deemed ready for scale.
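The acceptance thresholds named above can be expressed as an executable gate. The specific thresholds here (99.5% API success, 30-minute P95 latency, 0.1% reconciliation difference) are illustrative assumptions; each pilot should substitute its agreed values.

```python
# Sketch of a pilot acceptance gate: API success rate, order-to-cash
# latency, and reconciliation tolerance are evaluated together before
# declaring the pilot ready to scale. Thresholds are assumptions.

def pilot_passes(metrics,
                 min_success_rate=0.995,
                 max_latency_minutes=30,
                 max_recon_diff_pct=0.1):
    """Return (passed, failures) for the agreed acceptance criteria."""
    failures = []
    if metrics["api_success_rate"] < min_success_rate:
        failures.append("api_success_rate")
    if metrics["p95_order_to_erp_minutes"] > max_latency_minutes:
        failures.append("latency")
    if metrics["recon_diff_pct"] > max_recon_diff_pct:
        failures.append("reconciliation")
    return (not failures), failures

ok, fails = pilot_passes({
    "api_success_rate": 0.998,
    "p95_order_to_erp_minutes": 42,   # breaches the 30-minute target
    "recon_diff_pct": 0.05,
})
print(ok, fails)  # False ['latency']
```

Encoding the gate this way forces the project team to agree on numbers up front, which is exactly what distinguishes an acceptance test from a demo.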
After we scale beyond the pilot, what ongoing technical governance should CIO and RTM CoE put in place to monitor integration SLAs, data quality, and offline sync failures across countries?
C1119 Post-go-live governance for integrations — For a beverages CPG firm in Africa planning to scale its route-to-market platform after a successful pilot, what post-purchase technical governance criteria should the CIO and RTM CoE establish for monitoring integration SLAs, data quality, and offline sync failures across markets?
After a successful RTM pilot, a beverages CPG in Africa should establish technical governance criteria for monitoring integration SLAs, data quality, and offline sync failures across markets. Scaling safely requires systematic, cross-country visibility into how well integrations and field syncs are performing.
Without ongoing governance, integration issues often reappear during expansion: some markets experience higher error rates, distributors change systems, or mobile sync failures go unnoticed until sales reports diverge from reality. A control-tower view over technology health is as important as one over commercial KPIs.
The CIO and RTM CoE should define metrics and dashboards for: API uptime and latency for ERP, tax, and DMS connectors; error volumes and resolution times by market; master-data mismatch rates; and offline sync success rates, including the number of pending transactions on devices. Governance mechanisms should also include standard incident-management workflows, regular integration health reviews with IT and Operations, and clear owners for data-quality remediation in each market.
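The offline-sync metrics listed above can be rolled up per market as in this sketch. Event and field names are assumptions; the shape of the output is what a governance dashboard would consume.

```python
# Illustrative rollup (assumed field names) of offline sync health per
# market: success rate and pending on-device transactions, feeding the
# cross-country governance dashboard described above.

def sync_health(events):
    """Aggregate device sync events into per-market health metrics."""
    by_market = {}
    for e in events:
        m = by_market.setdefault(e["market"], {"ok": 0, "fail": 0, "pending": 0})
        m["ok" if e["status"] == "success" else "fail"] += 1
        m["pending"] += e["pending_txns"]
    return {
        market: {
            "success_rate": round(v["ok"] / (v["ok"] + v["fail"]), 2),
            "pending_txns": v["pending"],
        }
        for market, v in by_market.items()
    }

events = [
    {"market": "KE", "status": "success", "pending_txns": 0},
    {"market": "KE", "status": "failed",  "pending_txns": 12},
    {"market": "NG", "status": "success", "pending_txns": 3},
]
print(sync_health(events))
```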
In our RTM playbook, how can Strategy explain API-first design, MDM, and offline UX in business terms so commercial leaders see them as growth enablers, not IT overhead?
C1120 Explaining tech criteria to commercial leaders — When a mid-sized CPG household products company in India is updating its internal route-to-market playbook, how should the strategy team articulate the importance of technical and integration criteria—such as API-first architecture, master data management, and offline UX—to non-technical commercial leaders so that these are seen as business enablers rather than IT overhead?
When updating the RTM playbook, the strategy team should explain technical and integration criteria—such as API-first architecture, master data management, and offline UX—in business language that links them directly to execution reliability, numeric distribution, and claim accuracy. Non-technical leaders need to see these as levers for control and growth, not as IT abstractions.
API-first can be framed as “standard connectors that prevent reporting surprises when ERP or tax rules change.” Master data management can be described as “one outlet and SKU identity across all distributors so coverage, scheme ROI, and fill-rate dashboards are trusted.” Offline UX can be positioned as “apps that keep taking orders even with poor network, so beats and strike rates don’t collapse during outages.”
The playbook should use concrete examples: how poor masters created distributor disputes; how slow or unreliable apps reduced calls per day; or how brittle integrations delayed claim settlements. By tying each technical criterion to familiar RTM pain points and KPIs, commercial leaders will treat technology decisions as part of the operating model, not an isolated IT concern.