Thought Leadership

How your company can adopt a Usage-Based business model like AWS

July 13, 2022

This blog post was also featured on TechCrunch.

Of the 300+ services that Amazon Web Services (AWS) has released over the years, not a single one has needed to be rolled back from being priced incorrectly.

That’s no accident.

Usage-based pricing (UBP) delivers very high levels of revenue growth and product adoption. Any company operating in the cloud can (and should) adopt a usage-based pricing model enabled by best practices and frameworks that optimize the value being provided to the customer.

I worked for a few years as a general manager at AWS, where I personally oversaw Amazon CloudSearch and some of the attached services as they scaled to over a billion dollars in revenue on the back of a usage-based pricing model. Later, when we launched Amazon OpenSearch, the same model contributed to its massive success and adoption.

The usage-based model is the only one that makes sense in the cloud. Given the elastic nature of the underlying infrastructure, anything that gets layered on top needs to be just as flexible — and that includes pricing. Below, I lay out six steps to get started with a usage-based pricing model at your business.

Step 1: Implement usage metering

Many companies make the mistake of starting with a pricing model and then trying to work backward into measuring usage.

This is the wrong approach. The first step needs to be metering all of your technology artifacts. If you are a startup building from scratch, implementing metering right at the outset will give you a tremendous advantage.

Knowing who is using what, when, where and how much will help you unlock valuable insights across all functional groups and teams, and make determining pricing much more straightforward.

Seek a purpose-built metering service

Do not fall into the trap of metering (usage instrumenting) only the items the pricing plans dictate. Meter your technology stack holistically. You should be thinking about metering first and then moving forward into pricing and billing, not backing into metering from pricing and billing.

Here’s a good way to internalize this:

Let’s say you know you want to charge on API calls, and you begin by considering a tiered pricing model for the API calls by count. Working your way backward from this into metering, you will arrive at the conclusion that you need to meter the number of API calls.

Contrast this with the metering forward approach. First you determine you need to usage instrument API calls because it is one of the core features of how your customers engage with your product. Then, you ask yourself, what is a holistic way to usage instrument an API call?

The answer is, it needs to be metered three ways: count, payload and duration. You need three meters, not just count alone. You may ultimately decide only to charge on count, but believe me, you will find having the same level of visibility into payload and duration gives you valuable insights into a customer’s overall usage profile, which in turn helps you keep the count charge optimized.
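
To make this concrete, here is a minimal sketch in Python of instrumenting a single API call with all three meters. The meter names and the `emit` function are hypothetical stand-ins for illustration, not any particular metering SDK:

```python
import time
from dataclasses import dataclass, asdict

@dataclass
class MeterEvent:
    meter: str        # hypothetical meter names: "api.count", "api.payload_bytes", "api.duration_ms"
    customer_id: str
    value: float
    timestamp: float

def emit(event: MeterEvent) -> None:
    # Stand-in for a real metering SDK call; here we simply print the event.
    print(asdict(event))

def metered_api_call(customer_id: str, payload: bytes) -> None:
    start = time.time()
    # ... handle the request ...
    duration_ms = (time.time() - start) * 1000
    now = time.time()
    # One API call produces three meter readings: count, payload and duration.
    emit(MeterEvent("api.count", customer_id, 1, now))
    emit(MeterEvent("api.payload_bytes", customer_id, len(payload), now))
    emit(MeterEvent("api.duration_ms", customer_id, duration_ms, now))

metered_api_call("customer-x", b'{"query": "hello"}')
```

You may only ever bill on the first meter, but the other two are what give you the customer's full usage profile.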

Reduce (ideally eliminate) the number of data hops when collecting meter data

Metering exists as its own artifact because of what it is specifically designed to do: track usage — what was used by whom, when, where and how much — accurately, consistently and at scale. Accuracy is the key.

Unlike other technology primitives (like logging or monitoring) that may superficially resemble a data ingestion pipeline, metering is the only one that, as a matter of its foundational design principle, has to be accurate.

That is, records cannot be dropped. Records cannot be double-counted (or not counted). Why? Because inevitably it will feed data into a billing system. Across your organization, you need one system of record for metering that you can always point to and rely on with full confidence for holding accurate usage and consumption data.

To guarantee accuracy, the best practice is to deploy the metering system at the source (where events originate) and stream events directly to the metering data store, with no intermediary hops through a staging data warehouse or data lake, letting the metering system perform the required ETL.

This is how you can leverage the metering system to do what it was designed to do: ingest events at scale, and transform and aggregate usage data accurately, reliably and consistently on the back of software design principles such as idempotency, data deduplication and full data lineage (audit trail).
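
As a rough illustration of idempotent, deduplicated ingestion, the sketch below derives a content-based idempotency key and keeps an in-memory store; a real metering service would apply the same principle against durable storage:

```python
import hashlib
import json

# In-memory stand-ins for the meter data store; a real system would use durable storage.
_seen_keys = set()
_store = []

def idempotency_key(event: dict) -> str:
    # Derive a deterministic key from the event contents so retried deliveries hash identically.
    canonical = json.dumps(event, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def ingest(event: dict) -> bool:
    """Ingest a usage event exactly once; return False if it was a duplicate."""
    key = idempotency_key(event)
    if key in _seen_keys:
        return False          # duplicate delivery (e.g., a producer retry) is dropped
    _seen_keys.add(key)
    _store.append({**event, "_lineage_key": key})  # keep the key for the audit trail
    return True

evt = {"meter": "api.count", "customer_id": "customer-x",
       "value": 1, "timestamp": "2022-04-18T10:00:00Z"}
assert ingest(evt) is True
assert ingest(evt) is False   # the retried copy is deduplicated, not double-counted
```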

This is important, because at some point, someone is inevitably going to question an invoice, a bill or usage. You need to have a single source of truth with data lineage to quickly and accurately unpack and address such requests. If the data has traversed through multiple hops, and maybe even some intermediary transformations, before arriving at the metering system, you have lost the ability to accurately and quickly trace back lineage and the root cause of error.

AWS teams deploy metering into the stack right at the outset of any new development, and certainly as a prerequisite to even beginning to think about what the customer-facing pricing model ought to be. If you do this right and put the correct metering infrastructure in place, you are on your way to fully leveraging the benefits of usage-based pricing. If you don't, you will always be playing catch-up and spending time in remediation. I cannot overstate this point.

When evaluating a metering service, measure it against the checklist of what a platform-oriented service ought to be. Consider whether the service is robust, scalable, serverless and full featured. Ask if it is delivered and priced as a platform-level service (that is, usage based, with discounting tiers as volume increases). Make sure it is deployable and available as a developer-friendly, platform-level service that is fully API-enabled (data in, data out) and SDK-rich.

Step 2: Build a usage model (before the price model)

A usage model shows overall aggregated usage by product features, filtered by customers and custom groupings. It is the real-time dashboard that serves to answer accurately what was used by whom, when, where and how much.

A usage model is the output of a full-featured metering service. There should be no delay and no custom work or design needed to see the raw events being ingested, aggregated, grouped in real time, sliced over a time series and displayed in tabular and graphical format on a usage dashboard.

A usage model should answer time-series usage questions accurately and in real time. For example (a query sketch follows this list):

  • What was the total count of API calls on April 18, 2022?
  • For customer X, what was the API call count on January 30, 2022?
  • For customer X, what was the peak API call count on January 30, 2022?
  • For customer Y, what was the API call count from April 1 through June 30, 2021?
  • For customer Y, how much storage was used on July 3, 2022?
  • For customer Z, what was the max storage used in February 2020, 2021, 2022?
  • Who are the top N customers using this meter (feature) in a specific region over last week?
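
As an illustration of the kind of question a usage model answers, here is a toy aggregation over a handful of hypothetical event records; a real usage model would run the equivalent query against the metering store in real time:

```python
from collections import Counter
from datetime import date

# Toy event records; in practice these live in the metering data store.
events = [
    {"meter": "api.count", "customer": "X", "value": 1, "day": date(2022, 4, 18)},
    {"meter": "api.count", "customer": "Y", "value": 1, "day": date(2022, 4, 18)},
    {"meter": "api.count", "customer": "X", "value": 1, "day": date(2022, 4, 19)},
]

# "What was the total count of API calls on April 18, 2022?"
total = sum(e["value"] for e in events
            if e["meter"] == "api.count" and e["day"] == date(2022, 4, 18))

# "Who are the top N customers using this meter?"
by_customer = Counter()
for e in events:
    if e["meter"] == "api.count":
        by_customer[e["customer"]] += e["value"]

print(total)                       # 2
print(by_customer.most_common(1))  # [('X', 2)]
```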

With this level of insight, you are now on your way to institutionalizing usage-based pricing to scale with your business and growth for the long term.

Step 3: Build a cost model

A cost model is essentially a dashboard of the cost footprint of cloud resources consumed by your services as a whole, grouped by customer and custom attributes. This is a step often overlooked or left to chance. The good news is, you already have the tools and artifacts used in steps 1 and 2 to conquer this without much extra effort.

Classify meters into two types:

  1. Usage meters: What your customers are using of your products and services.
  2. Cost meters: What your products and services are using of the underlying cloud resources.

The end goal is to come up with the most optimized usage-based pricing plan. With steps 1 and 2 in place, we now have a reliable view into which product features and services are being used by whom, when, where and how much.

The question now is: What are the select few usage meters that are good candidates for a customer-facing pricing plan?

For example, you may have an API call for which the payload size can vary vastly based on the customer profile. In this case, you may choose to price on the storage cost and not on the API count, or a combination of the two. Or, if your API call execution results in an ETL type farm-out job, you may want to price only on the duration and not on the raw count.

The cost meters offer a way to align front-end usage (API call — usage meter) to back-end usage (API call gets farmed out into a Lambda function and ETL job, S3 storage and a database write — cost meters).

With cost meters in place, you get the usage profile of the cost footprint. In the above example, you may find that the Lambda function and S3 storage charges dominate the usage-cost profile. You may then choose to spread the cost of the database write and ETL job over the Lambda function and/or S3 line items.
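
Here is a hedged sketch of that alignment, using made-up per-event rates purely for illustration (your actual cost meters and rates depend on your own cloud bill):

```python
# Hypothetical per-event cost meters (in dollars) attributed to one front-end API call.
cost_meters = {
    "lambda.invocation": 0.0000021,
    "s3.put":            0.0000054,
    "db.write":          0.0000009,
    "etl.job_second":    0.0000030,
}

# Align the front-end usage meter ("api.count") with its back-end cost footprint.
cost_per_api_call = sum(cost_meters.values())

# One option from the text: fold the smaller db/ETL costs into the dominant drivers
# (Lambda and S3) so the customer-facing plan stays simple.
minor = cost_meters["db.write"] + cost_meters["etl.job_second"]
blended = {
    "lambda.invocation": cost_meters["lambda.invocation"] + minor / 2,
    "s3.put":            cost_meters["s3.put"] + minor / 2,
}

print(round(cost_per_api_call, 7))
print(blended)
```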

Run simulations

The earlier in your development process you start putting in usage instrumentation, the more historical data you have to work with from step 3 onward.

If you didn’t have the option to put in metering early and have to go to launch without having historical usage data already collected by the metering service, you should generate sample datasets. Ingest sample datasets (backfill) into your metering system (seek a metering system that supports this critical feature), and then proceed from step 2 onward.

Step 4: Build a pricing model

You are now ready to build and iterate on a pricing model. With steps 1 through 3 in place, this truly will be a fun, empowering, insightful and data-backed exercise. Keep in mind that pricing is ephemeral: plans will come and go and change over time, but the usage record is the historical record. If you have historical usage data, pricing can be built and modeled at any time.

Begin by building at least two different pricing plans. Explore different line items and rates. Seek a usage-based pricing and billing service that, in addition to creating pricing plans with various usage rate types (unit, block, volume, tiered, etc., which are table stakes), can also work over large volumes of usage data by connecting directly to a metering service and performing price simulations in real time.

Since you already have usage data metered and available, this should be a snap. Note that the pricing and billing app is not taking ownership of the accuracy of the usage data or the ingestion and collection of usage data. That is the function of the metering platform. The pricing and billing application defines and applies the rate card (or pricing plans) and generates on-demand, real-time metered invoicing and billing.
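
As an illustration of how a rate card applies graduated (tiered) rates to metered usage, here is a small sketch with made-up tier boundaries and prices:

```python
# Hypothetical graduated rate card for the "api.count" meter:
# first 1M calls at $1.00 per 1,000, next 9M at $0.80 per 1,000, the rest at $0.50 per 1,000.
TIERS = [
    (1_000_000, 1.00 / 1000),
    (9_000_000, 0.80 / 1000),
    (float("inf"), 0.50 / 1000),
]

def tiered_charge(usage: int) -> float:
    """Apply the rate card tier by tier to a metered usage total."""
    remaining, charge = usage, 0.0
    for tier_size, rate in TIERS:
        in_tier = min(remaining, tier_size)
        charge += in_tier * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return round(charge, 2)

# 1M @ $1/1k + 9M @ $0.80/1k + 2.5M @ $0.50/1k = $9,450.00
print(tiered_charge(12_500_000))
```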

Build your pricing plans based on the customer-facing posture you wish to take (e.g., free tier, free trial, multiple plans, line-item-only or credits-based). As long as you have a metering ingestion stream working and scaling independently of pricing and billing, you are well positioned for scale, growth and whatever comes in the future.

Step 5: Do a beta test

Once steps 1 through 4 are done, beta test with a set of customers. Keep in mind that the steps outlined are designed so that by the time you get to beta testing, you have a near-final pricing plan — an advantage of having a metering service in place early. If possible, beta test for at least one or two monthly billing cycles to surface any edge cases.

As in any mature software development process, beta testing with real customer usage data is good hygiene and provides the opportunity to catch outlier usage patterns or surprises. A full-featured usage-based pricing and billing application should now be generating real-time, current invoices with line-item-level metered usage data and price breakdowns.

Additionally, across metering and billing, you should have full lineage and visibility into the life cycle of each event, from ingestion to pricing, invoicing and billing. Once you have run it for one to two monthly billing cycles and have verified usage and billing using a built-in data-lineage pipeline, you are ready to go into production with full confidence.

Step 6: Continuous price modeling

The pace of innovation is the hallmark of cloud businesses. As you come up with new products and features, or even as you scale your customer base on the existing pricing plan, you’ll need a built-in price-modeling tool as your guide.

The price-modeling tool will help product, sales, finance and accounting teams with their respective planning needs and with what-if scenarios that are reliable and trustworthy. Having forecasting built into the pricing and modeling tool as a first-class object provides additional insights for future planning and business operations.
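
A simple what-if comparison might look like the sketch below, which prices the same assumed monthly usage under two hypothetical rate cards; a real price-modeling tool would run this against live metered data:

```python
# What-if: compare two hypothetical rate cards over the same historical usage.
monthly_usage = [2_000_000, 3_500_000, 5_000_000]   # assumed api.count per month

def flat_rate(usage: int) -> float:
    # Plan A: a flat $0.90 per 1,000 calls.
    return round(usage * 0.0009, 2)

def tiered(usage: int) -> float:
    # Plan B: first 1M calls at $1.00 per 1,000, everything above at $0.70 per 1,000.
    tiers = [(1_000_000, 0.001), (float("inf"), 0.0007)]
    remaining, charge = usage, 0.0
    for size, rate in tiers:
        in_tier = min(remaining, size)
        charge += in_tier * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return round(charge, 2)

for usage in monthly_usage:
    print(usage, "flat:", flat_rate(usage), "tiered:", tiered(usage))
```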

The rise of usage-based pricing is directly related to the pace of innovation in the cloud, the growing importance of software across all industries, more things shifting left toward the developer and the rise of product-led growth (PLG). Not all of these trends are mainstream yet, but in my view, they are all inevitable.

So if you’re reading this now, and you haven’t yet started your journey to building and implementing a usage-based model, consider it an opportunity to get started today and save yourself a lot of headache down the line.

There is a lot of discussion around usage-based pricing lacking predictability, and therefore, potentially not being a good choice for some. For now, let me just say that predictability is key, and if it is lacking, it is not because of a limitation of the model itself but rather a lack of proper tooling and infrastructure.

Read the full article on TechCrunch here.
