Thought Leadership

TechCrunch - How Every SaaS Company Can Monetize Generative AI

September 28, 2023

Initially posted on TechCrunch on August 21, 2023

If you work in SaaS, you’ve likely already been part of a conversation at your company about how your customers could get more value from your products once they are infused with Generative AI, LLMs, or custom AI/ML models.

As you hash out your approach and draw up the product roadmap, I want to call out an important aspect, one that brings to mind the good ol’ California Gold Rush: don’t show up to the gold rush without a shovel! In the same way, don’t overlook the monetization side of your SaaS + AI. Factor it in at the outset and build the right plumbing from the start, not as an afterthought or a post-launch retrofit.

Two years ago, I wrote about the inevitable shift to metered pricing for SaaS. The catalyst that would propel the shift was unknown at the time, but the foundational thesis held. No one could have predicted back in 2021 that a particular form of AI would serve as that catalyst.

SaaS + AI. What got you here won’t get you there!

The first thing to realize is that what is required is not merely a “pricing” change; it is a business model change. Traditionally, SaaS pricing has been a relatively lightweight exercise: a simple per-seat model with a price point set sufficiently far above underlying costs to attain the desired margins. A pricing change is a change in what you charge; for example, going from $79 per user/month to $99 per user/month. A monetization model change is a fundamental shift in how you charge, and with AI as a consumption vector, it inevitably requires accurate metering and usage-based pricing models.

There are already plenty of great examples of companies leveraging usage-based pricing to monetize AI, including OpenAI and other providers of foundational AI models and services, as well as the likes of Twilio, Snap, Quizlet, Instacart, and Shopify, which are integrating with these services to offer customer-facing tooling.

Why usage-based pricing is a natural fit for generative AI

One challenge of monetizing generative AI is that the prompts and outputs vary in length, and the prompt/output size and resource consumption are directly related – with a larger prompt requiring greater resources to process and vice versa. Adding to the complexity, one customer may only use the tool sparingly while another could be generating new text multiple times daily for weeks on end, resulting in a much larger cost footprint. Any viable pricing model must account for this variability and scale accordingly.

On top of this, services like ChatGPT are themselves priced according to a usage-based model. This means that any tool leveraging ChatGPT or similar models will itself be billed based on usage; since the backend cost of providing the service is inherently variable, customer-facing billing should be usage-based as well.
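As a rough illustration of how that backend cost scales with prompt and completion size, here is a minimal sketch; the per-1K-token prices are placeholders, not any vendor’s actual rates:

    # Illustrative only: the per-1K-token prices below are placeholders, not any vendor's actual rates.
    PRICE_PER_1K_INPUT = 0.0015   # assumed $ per 1,000 prompt tokens
    PRICE_PER_1K_OUTPUT = 0.0020  # assumed $ per 1,000 completion tokens

    def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
        """Backend cost of a single generation request."""
        return (prompt_tokens / 1000) * PRICE_PER_1K_INPUT \
            + (completion_tokens / 1000) * PRICE_PER_1K_OUTPUT

    # A light request vs. a heavy one: the cost differs by more than an order of magnitude.
    print(request_cost(200, 150))    # ~ $0.0006
    print(request_cost(3000, 2500))  # ~ $0.0095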

To deliver fair, transparent pricing and enable frictionless adoption and user growth, companies should look to usage-based pricing. With elastic usage on the frontend and elastic costs on the backend, generative AI products are an ideal fit for a usage-based model.

How to get started

Meter frontend usage and backend resource consumption

Companies take pre-built or pre-trained models from a plethora of providers, sometimes fine-tune them on their own datasets, and then incorporate them into their technology stack as features. To obtain complete visibility into usage, costs, and margins, each call (be it API or direct) to the AI infrastructure should be metered to capture its underlying cost footprint: how many resources were consumed to service the request, e.g. token counts, duration, result size, frequency, and any other relevant performance metrics.
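A minimal sketch of that instrumentation, assuming a hypothetical emit_meter_event sink and a provider response that exposes token usage (most LLM APIs report it); the actual provider call is stubbed out:

    import time
    import uuid

    def emit_meter_event(event: dict) -> None:
        """Hypothetical sink: forward the event to your metering pipeline or API."""
        print(event)  # in practice, publish to a queue or POST to a metering endpoint

    def metered_generate(customer_id: str, prompt: str, call_model) -> str:
        """Wrap a model call and record the usage signals that drive its cost."""
        start = time.monotonic()
        response = call_model(prompt)  # your provider SDK call goes here
        emit_meter_event({
            "event_id": str(uuid.uuid4()),
            "customer_id": customer_id,
            "meter": "text_generation",
            "prompt_tokens": response["usage"]["prompt_tokens"],
            "completion_tokens": response["usage"]["completion_tokens"],
            "duration_ms": int((time.monotonic() - start) * 1000),
            "result_chars": len(response["text"]),
            "timestamp": time.time(),
        })
        return response["text"]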

You may choose to do any of the following (the first two are sketched in code after this list):

  • Add a markup over the underlying GenAI provider’s cost structure.
  • Price the customer-facing plans on a tiered model based on volume consumed.
  • Create hybrid charge vectors (some that carry forward the GenAI provider’s cost model, plus new vectors unique to your products or services).
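Here is how the first two approaches might look in code; the markup percentage and tier boundaries are made up for illustration:

    def markup_price(provider_cost: float, markup: float = 0.40) -> float:
        """Approach 1: pass the provider's cost through with a fixed markup."""
        return provider_cost * (1 + markup)

    # Approach 2: graduated, volume-based tiers priced per 1,000 tokens (illustrative numbers).
    TIERS = [
        (1_000_000, 0.0040),     # first 1M tokens at $0.0040 per 1K
        (9_000_000, 0.0030),     # next 9M tokens at $0.0030 per 1K
        (float("inf"), 0.0020),  # everything beyond at $0.0020 per 1K
    ]

    def tiered_price(total_tokens: int) -> float:
        """Rate a billing period's token volume through the graduated tiers."""
        remaining, amount = total_tokens, 0.0
        for tier_size, rate_per_1k in TIERS:
            in_tier = min(remaining, tier_size)
            amount += (in_tier / 1000) * rate_per_1k
            remaining -= in_tier
            if remaining <= 0:
                break
        return amount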

By metering both the customer-facing charge vectors and the corresponding backend consumption vectors, companies can create and iterate on usage-based pricing plans and get a real-time view into business KPIs like margins and costs, as well as technical KPIs like service performance and overall traffic. After creating the meters, deploy them to the solution or application where the events originate to begin tracking real-time usage.

Track usage, margins, and account health for all customers

Once the metering infrastructure is deployed, begin visualizing usage and costs in real time as customers exercise the generative services. Identify power users and lagging accounts, and empower customer-facing teams with contextual data to provide value at every touchpoint.

Since generative AI services like ChatGPT use a token-based billing model, obtain granular token-level consumption information for each customer using your service. This helps to inform customer-level margins and usage for AI services in your products, and is valuable intel going into sales and renewal conversations. Without a highly accurate and available real-time metering service, this level of fidelity into customer-level consumption, costs, and margins would not be possible.
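As a sketch, per-customer cost and margin can be rolled up from metered events shaped like those emitted above; the blended provider rate here is a placeholder:

    from collections import defaultdict

    ASSUMED_PROVIDER_RATE_PER_1K = 0.002  # placeholder blended provider cost per 1,000 tokens

    def customer_margins(meter_events: list, revenue_by_customer: dict) -> dict:
        """Aggregate token usage per customer and compare it against billed revenue."""
        tokens = defaultdict(int)
        for event in meter_events:
            tokens[event["customer_id"]] += event["prompt_tokens"] + event["completion_tokens"]

        report = {}
        for customer_id, total_tokens in tokens.items():
            cost = (total_tokens / 1000) * ASSUMED_PROVIDER_RATE_PER_1K
            revenue = revenue_by_customer.get(customer_id, 0.0)
            report[customer_id] = {
                "tokens": total_tokens,
                "cost": round(cost, 4),
                "revenue": revenue,
                "margin": round(revenue - cost, 4),
            }
        return report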

Launch and iterate with flexible usage-based pricing

After deploying meters to track the usage and performance of the generative AI solution, the next step is to monetize this usage with usage-based pricing. Identify the value metrics that customers should be charged for. For text generation, this could be a markup over tokens or underlying resources consumed, or the total processing time to serve the response; for image generation, it could be the size of the input prompt, the resolution of the generated images, or the number of images generated. Commonly, the final pricing will combine several factors like these.

After you create the pricing plan and assign it to customers, real-time usage is tracked by the meters, then rated and billed by the pricing engine. The on-demand invoice needs to be kept up to date so that, at any time, both the vendor and its customers can view current usage and charges.
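A minimal sketch of that on-demand view: rate the period’s metered usage with the plan’s price function whenever the current balance is requested. The function and event shapes are assumptions carried over from the earlier sketches, not a specific billing product’s API:

    from datetime import datetime, timezone

    def current_invoice(customer_id: str, meter_events: list, price_fn) -> dict:
        """Rate all usage recorded so far this period into a live, on-demand invoice."""
        usage = [e for e in meter_events if e["customer_id"] == customer_id]
        total_tokens = sum(e["prompt_tokens"] + e["completion_tokens"] for e in usage)
        return {
            "customer_id": customer_id,
            "as_of": datetime.now(timezone.utc).isoformat(),
            "requests": len(usage),
            "tokens": total_tokens,
            "amount_due": round(price_fn(total_tokens), 2),
        }

    # Example: rate the running total with the tiered plan sketched earlier.
    # invoice = current_invoice("acct_123", meter_events, tiered_price)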

Integrate with your existing tools for next-generation customer success

Once metering is deployed and the billing service is configured, the final step is to integrate with third-party tools inside your organization to make usage and billing data visible and actionable. Integrate with CRM tooling to augment customer records with live usage data or to help streamline support ticket resolution.

With real-time usage data being collected, integrate this system with finance and accounting tools for usage-based revenue recognition, invoice tracking, and other tasks.
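For example, a nightly job might push each customer’s usage and margin summary to a CRM or finance system over a plain webhook; the endpoint and payload shape below are hypothetical:

    import json
    import urllib.request

    def push_usage_summary(summary: dict, endpoint: str, api_key: str) -> int:
        """POST a per-customer usage summary to a downstream system (hypothetical endpoint)."""
        request = urllib.request.Request(
            endpoint,
            data=json.dumps(summary).encode("utf-8"),
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {api_key}",
            },
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return response.status

    # push_usage_summary(report["acct_123"], "https://crm.example.com/api/usage", API_KEY)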

Conclusion

The emergence of ChatGPT ushered in a new era of interest and investment in artificial intelligence (AI) technology. Consequently, there is an ongoing boom of new products and services coming to market that integrate with ChatGPT and similar tools to deliver customer-facing GenAI capabilities, with ongoing discussions about pricing and go-to-market.

Don’t show up to the gold rush without a shovel. As you experiment with leveraging Generative AI and building it into your applications, set up usage-based metering in parallel so that you have a deeper understanding of how your customers are using the application and where they are getting value. From there, leverage these insights to build a transparent and fair business model that’s profitable at scale.
