
Google Cloud pricing data via API

Accurate pricing for complex cloud projects is a key requirement for success. Unfortunately, the sheer number of variables and the ever-changing pricing landscape, shifting regions, and so on make it extremely difficult to keep a pricing model up to date as a project evolves.

Google Cloud anticipated this need when they created the Cloud Billing Catalog API. Using this API, it’s possible to get up-to-the-moment pricing for all Google Cloud SKUs. These pricing details can then be added to a Google Sheet or other dataset to create flexible cost models for projects of any complexity.
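Under the hood there are just two REST calls involved. The sketch below shows how the request URLs are formed, assuming the public Catalog API v1 paths; APIKEY is a placeholder, not a real key.

```python
# Sketch of the two Cloud Billing Catalog API requests used by the
# script below. APIKEY is a placeholder, not a real key.
API_KEY = 'APIKEY'
BASE = 'https://cloudbilling.googleapis.com/v1'

# List every billable service (ID and display name):
services_url = '%s/services?key=%s' % (BASE, API_KEY)

# List every SKU for one service (95FF-2EF5-5EA1 is Cloud Storage):
skus_url = '%s/services/%s/skus?key=%s' % (BASE, '95FF-2EF5-5EA1', API_KEY)

print(services_url)
print(skus_url)
```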

A Google Storage Example

I recently put together a cost proposal that involved backing up dozens of data centers across various cloud regions to Cloud Storage. I wrote the following script to quickly pull pricing for the various storage classes and regions.

import argparse
import json

import requests

with open('storage-pricing.conf.json') as f:
    conf = json.load(f)

def printstorageskus(serviceid, regions, classdescriptionselector):
    # make the API call and load the response into a dictionary
    r = requests.get('https://cloudbilling.googleapis.com/v1/services/%s/skus?key=%s'
                     % (serviceid, conf['apikey']))
    storagedata = json.loads(r.text)
    # loop through skus and print details about any that match function arguments
    for item in storagedata['skus']:
        if 'geoTaxonomy' in item and any(reg in item['geoTaxonomy']['regions'] for reg in regions):
            if any(desc in item['description'] for desc in classdescriptionselector):
                for pricing in item['pricingInfo']:
                    for tiers in pricing['pricingExpression']['tieredRates']:
                        # unitPrice is a Money object: whole currency units plus nanos (1e-9 units)
                        unitprice = int(tiers['unitPrice'].get('units', 0)) + tiers['unitPrice']['nanos'] * 1e-9
                        displayprice = unitprice * pricing['pricingExpression']['displayQuantity']
                        print('"%s", "%0.9f", "%s", "%s"' % (item['description'], displayprice,
                                                             pricing['pricingExpression']['usageUnitDescription'],
                                                             item['skuId']))

def printservices():
    # make the API call and load the response into a dictionary
    r = requests.get('https://cloudbilling.googleapis.com/v1/services?key=%s' % conf['apikey'])
    servicedata = json.loads(r.text)
    # loop through services and print ID: Name
    for item in servicedata['services']:
        print("%s: %s" % (item['serviceId'], item['displayName']))

# command line management
parser = argparse.ArgumentParser(
    description='Get storage pricing through the API',
    epilog='Example: python --regions europe-west3 europe-west4 --classes Standard')
parser.add_argument('--list-services', action='store_true', help='Print a listing of all Google Cloud Services with IDs and exit')
parser.add_argument('--serviceid', help='The Service ID to use when pulling SKUs', default='95FF-2EF5-5EA1') # 95FF-2EF5-5EA1 is cloud storage
parser.add_argument('--regions', nargs='+', help='A list of Regions to include')
parser.add_argument('--classes', nargs='+', help='A list of Storage Classes to include')
args = parser.parse_args()
if args.list_services:
    printservices()
else:
    printstorageskus(args.serviceid, args.regions, args.classes)

The configuration file referenced above is a JSON document, as shown below. The APIKEY value is an API key generated in the Google Cloud console under APIs & Services > Credentials.

    "apikey": "APIKEY"

You might notice that I added an option to list all Google Cloud Services, but the remainder of the script is specific to the Cloud Storage service. I haven’t decided whether it would be worth making this script more general to accommodate more services, or to just update it as needed for a specific case. Add a comment if you have an opinion or adaptation to share.

The output of the script is as follows:

$ python --regions us-central1 us-west1 --classes Standard Coldline
"Standard Storage US Multi-region", "0.026000000", "gibibyte month", "0D5D-6E23-4250"
"Coldline Storage Oregon (Early Delete)", "0.000133330", "gibibyte day", "30E0-4198-95E0"
"Coldline Storage US Multi-region", "0.007000000", "gibibyte month", "37C4-203D-1024"
"Standard Storage Iowa/South Carolina Dual-region", "0.036000000", "gibibyte month", "420D-7730-6B64"
"Coldline Storage Iowa/South Carolina Dual-region (Early Delete)", "0.000300000", "gibibyte day", "500F-E221-2FDB"
"Coldline Storage Iowa/South Carolina Dual-region", "0.009000000", "gibibyte month", "6E87-6282-15AF"
"Coldline Storage Iowa (Early Delete)", "0.000133330", "gibibyte day", "A03B-4D93-68BB"
"Coldline Storage US Multi-region (Early Delete)", "0.000233330", "gibibyte day", "A373-55C4-6451"
"Coldline Storage Iowa", "0.004000000", "gibibyte month", "D716-3E59-667B"
"Coldline Storage Oregon", "0.004000000", "gibibyte month", "E209-9471-B5A2"
"Standard Storage US Regional", "0.000000000", "gibibyte month", "E5F0-6A5D-7BAD"
"Standard Storage US Regional", "0.020000000", "gibibyte month", "E5F0-6A5D-7BAD"

The output above can be copied and pasted directly into a spreadsheet as comma-delimited values and immediately used to calculate pricing. In cases where the customer has contractual discounts for specific SKUs, those discounts can be identified and applied to the cost model.
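As a quick sanity check before building the full sheet, a rate from the output can be multiplied out in a few lines. The volume below is hypothetical; the rate is the Standard Storage Iowa/South Carolina Dual-region row above.

```python
# Estimate monthly storage cost from a per-unit rate in the script output.
rate_per_gib_month = 0.036   # Standard Storage Iowa/South Carolina Dual-region
data_tib = 50                # hypothetical backup footprint in TiB
gib = data_tib * 1024        # 1 TiB = 1024 GiB

monthly_cost = gib * rate_per_gib_month
print('$%.2f per month' % monthly_cost)
```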

Tiered SKUs and Discounts

Notice that there are two listings for “Standard Storage US Regional”, one at $0.00 and the other at $0.02. This is a tiered SKU, where the cost changes based on consumption; in this case the $0.00 entry is the free tier. This simple example doesn’t aggregate tiered rates, but if tiers are relevant for your use case you would need to enhance what is shown here. In some cases customer discounts may phase in or out at tier boundaries, which would also impact the pricing model.
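In the API response, each entry under tieredRates carries a startUsageAmount, so the cost of a tiered SKU is the sum of the usage falling into each tier at that tier's rate. A minimal sketch, with made-up tier boundaries (real boundaries come from the API response):

```python
def tiered_cost(usage, tiers):
    """tiers: list of (start_usage, unit_price) pairs, sorted by start_usage."""
    total = 0.0
    for i, (start, price) in enumerate(tiers):
        # this tier covers usage from `start` up to where the next tier begins
        end = tiers[i + 1][0] if i + 1 < len(tiers) else float('inf')
        if usage > start:
            total += (min(usage, end) - start) * price
    return total

# Hypothetical: first 5 GiB free, $0.02/GiB-month beyond that
tiers = [(0, 0.0), (5, 0.02)]
print(tiered_cost(100, tiers))  # 95 GiB billed at the paid rate
```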

Contract Pricing

In some cases, a customer may have contracted prices that differ from the current list prices. In those cases, you will want to use the locked-in contract rates in all calculations.
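One simple way to fold contract rates into the model is a per-SKU override map keyed by skuId, falling back to the list price. The override rate below is illustrative; the skuIds and list rates come from the script output above.

```python
# List rates keyed by skuId, as produced by the script above
list_rates = {
    'E5F0-6A5D-7BAD': 0.020,  # Standard Storage US Regional
    'D716-3E59-667B': 0.004,  # Coldline Storage Iowa
}

# Hypothetical contracted overrides; everything else stays at list price
contract_rates = {'E5F0-6A5D-7BAD': 0.016}

def effective_rate(sku_id):
    return contract_rates.get(sku_id, list_rates[sku_id])

print(effective_rate('E5F0-6A5D-7BAD'))  # contracted rate
print(effective_rate('D716-3E59-667B'))  # list rate
```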

An Example Google Sheet

When pasting the above into a Google Sheet, there is an option to split it into separate fields based on a delimiter, a comma in this case.

In addition to possible discounts, other factors to consider include deduplication, compression, and block size for multipart or parallel uploads, since these affect how much data is transferred and stored and how many API calls are required to accomplish it. In a backup scenario, it may also help to include the anticipated monthly incremental increase in total data. After working in these factors, the spreadsheet can be extended into a complete cost model.
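The incremental-growth factor compounds month over month, so it is worth modeling explicitly. A sketch with hypothetical numbers (the rate is the Standard Storage US Multi-region row above):

```python
# Project 12 months of storage cost with compounding monthly growth.
rate_per_gib_month = 0.026   # Standard Storage US Multi-region
gib = 10240                  # hypothetical initial backup set (10 TiB)
monthly_growth = 0.03        # hypothetical 3% new data per month

total = 0.0
for month in range(12):
    total += gib * rate_per_gib_month   # this month's storage bill
    gib *= 1 + monthly_growth           # data set grows before next month
print('Year-one cost: $%.2f' % total)
```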

There are obviously a lot of ways to slice and dice pricing. Hopefully this little Python script to access the Google Cloud Catalog API will be useful.
