Optimizing PMAX: Google Ads Scripts and Custom AI-Driven Actions

Are you running Performance Max (PMAX)? Have you noticed a drop in performance but can't figure out why, even when your optimization score is at 95% and Ad Strength is Excellent? Do you really trust automation?

While there's plenty of debate about the best way to scale results on Google, we've found that sustainable Google Ads growth comes from combining code with a sound, tiered cross-channel structure. Yes, it's a throwback to the days when being a tech-savvy marketer was key to driving success in growth marketing. And yes, that's exactly what we do.

What Are Google Ads Scripts?

Google Ads Scripts provide a way to programmatically control your Google Ads campaigns using simple JavaScript. These scripts allow you to automate routine tasks and interact with external data and systems.

Read more here: Google Ads Scripts Documentation

Scripts can automate tasks like bid adjustments, advanced exclusion management, and performance monitoring while keeping you in control of the logic. That moves you away from a set-and-forget approach on one end, and away from mind-numbing manual work on the other.

What’s Needed:

  • Basic JavaScript: An entry-level grasp of the language is enough.

  • Logic, Creativity, and AI: Combine brains, creativity, and the latest AI tools.

  • Tech-Savvy Support: Consider a tech-focused service provider with proven expertise. (Research their team on LinkedIn and GitHub to confirm their credentials.)

Practical Applications for PMAX

One of the challenges with PMAX is that its automation can lead to irrelevant ad placements, audience overlaps, and the occasional weird ad text. For example, in some of our campaigns, ads repeatedly appeared on unrelated blogs, political sites, or even the Daily Mail, right in the middle of Black Friday, in what also happened to be an election year.

By using Google Ads Scripts, you can refine exclusions dynamically, ensuring your ads align better with your broader growth plan. This level of control can prevent wasted spend and improve the overall effectiveness of your PMAX campaigns. 

We've decided to give some of these scripts away. The examples below dynamically manage exclusions, optimize audience layering, and rebalance budgets without endless manual adjustments. We then take that continuous flow of data and re-add placements using PyTorch to actually build intelligence into the loop.

Exclude Low-Performing Placements:

(Once the script flags placements, add them under Tools > Content Suitability > Advanced Settings in your Google Ads account.)

function main() {
  var placementThreshold = 100; // Minimum impressions for evaluation
  var performanceCutoff = 0.05; // CTR threshold for exclusion

  var report = AdsApp.report(
    "SELECT CampaignName, Placement, Impressions, Clicks " +
    "FROM PLACEMENT_PERFORMANCE_REPORT " +
    "WHERE CampaignName CONTAINS 'PMAX' AND Impressions > " + placementThreshold
  );

  var rows = report.rows();
  while (rows.hasNext()) {
    var row = rows.next();
    var ctr = row['Clicks'] / row['Impressions'];
    if (ctr < performanceCutoff) {
      AdsApp.placements()
        .withCondition("Placement = '" + row['Placement'] + "'")
        .exclude();
      Logger.log('Excluded low-performing placement: ' + row['Placement']);
    }
  }
}

Audience Layering Adjustments

This script tells you which audiences overlap across campaigns and which ones actually perform. From there, it's probably a good idea to bid higher on the winners and/or restructure accordingly.

function main() {
  var audienceThreshold = 50; // Minimum conversions for evaluation
  var performanceCutoff = 1.5; // Cost-per-conversion threshold for exclusion
  var roasThreshold = 3.0; // ROAS threshold for high-performing audiences

  var audienceData = {}; // Object to store audience data for overlap detection
  var overlapThreshold = 1; // An audience appearing in more campaigns than this counts as an overlap

  // Report to gather audience performance data across campaigns
  var report = AdsApp.report(
    "SELECT AudienceName, Conversions, CostPerConversion, ConversionValue, CampaignName " +
    "FROM AUDIENCE_PERFORMANCE_REPORT " +
    "WHERE Conversions > " + audienceThreshold
  );

  var rows = report.rows();
  while (rows.hasNext()) {
    var row = rows.next();
    var audienceName = row['AudienceName'];
    var campaignName = row['CampaignName'];
    var conversions = parseFloat(row['Conversions']);
    var costPerConversion = parseFloat(row['CostPerConversion']);
    var conversionValue = parseFloat(row['ConversionValue']);
    var totalCost = costPerConversion * conversions; // Reconstruct total spend for this audience
    var roas = conversionValue / totalCost; // ROAS = total conversion value / total spend

    // Initialize audience data structure if not already initialized
    if (!audienceData[audienceName]) {
      audienceData[audienceName] = {
        appearances: 0,
        campaigns: [],
        costPerConversion: costPerConversion,
        roas: roas
      };
    }
    // Increment the number of appearances of this audience across campaigns
    audienceData[audienceName].appearances++;
    audienceData[audienceName].campaigns.push(campaignName);

    // Log audience performance
    Logger.log('Audience: ' + audienceName + ' | Campaign: ' + campaignName + ' | ROAS: ' + roas + ' | Cost/Conversion: ' + costPerConversion);

    // Exclude high-cost audiences
    if (costPerConversion > performanceCutoff) {
      AdsApp.targeting().audiences()
        .withCondition("AudienceName = '" + audienceName + "'")
        .exclude();
      Logger.log('Excluded high-cost audience: ' + audienceName);
    }

    // Identify high-performing audiences with a high ROAS
    if (roas >= roasThreshold) {
      Logger.log('High-Value Audience (ROAS > ' + roasThreshold + '): ' + audienceName);
    }
  }

  // Check for audience overlaps
  for (var audience in audienceData) {
    if (audienceData[audience].appearances > overlapThreshold) {
      Logger.log('Audience ' + audience + ' overlaps in ' + audienceData[audience].appearances + ' campaigns: ' + audienceData[audience].campaigns.join(", "));
    }
  }
}

Budget Rebalancing for Asset Groups

Budgets should not be split evenly. Skew spend toward the asset groups that actually return value: with a $1,000 budget and asset groups running at 4x, 3x, and 1x ROAS, the script below would allocate roughly $500, $375, and $125 respectively.

function main() {
  var report = AdsApp.report(
    "SELECT AssetGroupName, Cost, Conversions, ConversionValue " +
    "FROM ASSET_GROUP_PERFORMANCE_REPORT " +
    "WHERE CampaignName CONTAINS 'PMAX'"
  );

  var rows = report.rows();
  var totalBudget = 1000; // Total budget to redistribute (replace with your own figure)
  var assetGroupData = [];

  while (rows.hasNext()) {
    var row = rows.next();
    var roas = parseFloat(row['ConversionValue']) / parseFloat(row['Cost']);
    assetGroupData.push({
      name: row['AssetGroupName'],
      roas: roas,
      cost: parseFloat(row['Cost'])
    });
  }

  // Weight each asset group's share of budget by its share of total ROAS.
  // Note: asset groups share their parent campaign's budget, so the update
  // below is applied at the campaign level.
  var totalROAS = assetGroupData.reduce((sum, ag) => sum + ag.roas, 0);
  assetGroupData.forEach(function (ag) {
    var adjustedBudget = (ag.roas / totalROAS) * totalBudget;
    AdsApp.campaigns()
      .withCondition("AssetGroupName = '" + ag.name + "'")
      .get()
      .next()
      .getBudget()
      .setAmount(adjustedBudget);
    Logger.log('Updated budget for asset group: ' + ag.name + ' to ' + adjustedBudget.toFixed(2));
  });
}

Cool, huh? Then what?

You can parse the script outputs into a CSV or connect directly to the Google Ads API to fetch campaign performance data. Using Python, this data can be analyzed and modeled for intelligent optimizations.

Below is an example workflow, assuming we've exported CSV files from the Google Ads scripts above.

import pandas as pd
import requests

# Load CSV files exported by the Google Ads scripts
placements_df = pd.read_csv("placements_performance.csv")
audiences_df = pd.read_csv("audiences_performance.csv")
asset_groups_df = pd.read_csv("asset_groups_performance.csv")

# Merge datasets for holistic analysis
merged_df = pd.merge(placements_df, audiences_df, on="CampaignName", how="inner")
merged_df = pd.merge(merged_df, asset_groups_df, on="CampaignName", how="inner")

# Example: keep only placements at or above the CTR threshold (drop low performers)
performance_threshold = 0.05  # CTR threshold
filtered_df = merged_df[merged_df['CTR'] >= performance_threshold]

# Identify high-cost audiences
high_cost_audiences = audiences_df[audiences_df['CostPerConversion'] > 1.5]

api_url = "https://pable.ai/"

# Convert high-cost audiences to a list of records (or whatever format your application expects)
data_to_push = high_cost_audiences.to_dict(orient='records')

# Push the data to the application via a POST request
response = requests.post(api_url, json=data_to_push)

# Check if the request was successful
if response.status_code == 200:
    print("Data successfully pushed to the application!")
else:
    print(f"Failed to push data. Status code: {response.status_code}")

Build a Neural Network for Optimization

With PyTorch, let's create a custom neural network that learns from past performance and makes data-driven ROAS predictions.

import torch
import torch.nn as nn
import torch.optim as optim
from sklearn.preprocessing import MinMaxScaler
from sklearn.model_selection import train_test_split
import numpy as np

# Prepare data for training
features = merged_df[['Impressions', 'Clicks', 'Conversions', 'Cost']].values
labels = merged_df['ROAS'].values

# Normalize data
scaler = MinMaxScaler()
features = scaler.fit_transform(features)

# Train/Test split
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2, random_state=42)

# Convert data to PyTorch tensors
X_train = torch.tensor(X_train, dtype=torch.float32)
X_test = torch.tensor(X_test, dtype=torch.float32)
y_train = torch.tensor(y_train, dtype=torch.float32).view(-1, 1)
y_test = torch.tensor(y_test, dtype=torch.float32).view(-1, 1)

# Define the neural network model
class ROASModel(nn.Module):
    def __init__(self):
        super(ROASModel, self).__init__()
        self.fc1 = nn.Linear(X_train.shape[1], 64)
        self.fc2 = nn.Linear(64, 32)
        self.fc3 = nn.Linear(32, 1)
        self.relu = nn.ReLU()

    def forward(self, x):
        x = self.relu(self.fc1(x))
        x = self.relu(self.fc2(x))
        x = self.fc3(x)
        return x

model = ROASModel()

# Define loss function and optimizer
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=0.001)

# Train the model
epochs = 50
batch_size = 16
n_batches = int(np.ceil(X_train.shape[0] / batch_size))

for epoch in range(epochs):
    model.train()
    epoch_loss = 0
    for batch in range(n_batches):
        start_idx = batch * batch_size
        end_idx = min((batch + 1) * batch_size, X_train.shape[0])

        batch_X = X_train[start_idx:end_idx]
        batch_y = y_train[start_idx:end_idx]

        optimizer.zero_grad()
        outputs = model(batch_X)
        loss = criterion(outputs, batch_y)
        loss.backward()
        optimizer.step()
        epoch_loss += loss.item()

    print(f"Epoch {epoch+1}/{epochs}, Loss: {epoch_loss/n_batches}")

# Predict on test data
model.eval()
with torch.no_grad():
    predictions = model(X_test)
print("Predicted ROAS Adjustments:")
print(predictions.numpy())
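
One practical step before moving on: persist the trained model and the fitted scaler so a scheduled job can reuse them instead of retraining from scratch. A minimal sketch; the file names here are just suggestions.

import joblib

# Save the trained weights and the fitted scaler
torch.save(model.state_dict(), "roas_model.pt")
joblib.dump(scaler, "scaler.joblib")

# Later: rebuild the architecture and load the weights back
restored_model = ROASModel()
restored_model.load_state_dict(torch.load("roas_model.pt"))
restored_model.eval()
restored_scaler = joblib.load("scaler.joblib")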

With the model in place, what now?

So, what do we do with all this data and intelligence? 

  • Adjust the budget? Hope not!

  • Do we just keep pausing placements and reading reports? 

  • Should we jump into TensorFlow? But is our TensorFlow even... right?

Maybe we’re all just stuck thinking Pandas = Panda.

What if we could dynamically bring back those previously excluded placements that might actually work in the future? Like those political YouTube channels—who knows, they could surprise us and drive results! 
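
Here's roughly how that could work. A minimal sketch, assuming we keep a hypothetical excluded_placements.csv export with a Placement column plus the same feature columns the model was trained on; we score each excluded placement with the trained model and flag anything whose predicted ROAS clears a 3x bar as a candidate to re-test.

# Hypothetical export of previously excluded placements with the same feature columns
excluded_df = pd.read_csv("excluded_placements.csv")

# Scale with the scaler fitted on the training data, then score with the trained model
X_excluded = scaler.transform(excluded_df[['Impressions', 'Clicks', 'Conversions', 'Cost']].values)
X_excluded = torch.tensor(X_excluded, dtype=torch.float32)

model.eval()
with torch.no_grad():
    excluded_df['PredictedROAS'] = model(X_excluded).squeeze(1).numpy()

# Flag placements the model expects to clear a 3x ROAS bar as re-test candidates
reinclusion_candidates = excluded_df[excluded_df['PredictedROAS'] >= 3.0]
print("Placements worth re-testing:")
print(reinclusion_candidates[['Placement', 'PredictedROAS']])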

Or could we actually build a custom model? 

Want to explore this further? Contact us and let’s dive deeper into the possibilities!
