
Posted by Romain Vialard, a Google Developer Expert and developer of Yet Another Mail Merge, a Google Sheets add-on.

Google Apps Script makes it easy to create and publish add-ons for Google Sheets, Docs, and Forms. There are now hundreds of add-ons available and many are reaching hundreds of thousands of users. Google Analytics is one of the best tools to learn what keeps those users engaged and what should be improved to make an add-on more successful.

Cookies and User Identification

Add-ons run inside Google Sheets, Docs, and Forms where they can display content in dialogs or sidebars. These custom interfaces are served by the Apps Script HTML service, which offers client-side HTML, CSS, and JS with a few limitations.

Among those limitations, cookies aren’t persistent. The Google Analytics cookie is recreated each time a user re-opens your dialog or sidebar, with a new client ID every time. As a result, Analytics sees each new session as if it were initiated by a new user, so your reported session and user counts will be nearly identical.


Fortunately, it’s possible to use localStorage to store the client ID, a better way to persist user information than cookies. After this change, your user metrics should be far more accurate.
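
For example, here is a minimal sketch of that approach (the tracking ID and storage key are placeholders):

// Create the tracker with cookie storage disabled, reusing any stored client ID.
ga('create', 'UA-XXXXX-Y', {
  'storage': 'none',
  'clientId': localStorage.getItem('ga_clientId')
});
// Persist the client ID (newly generated or reused) for the next dialog load.
ga(function(tracker) {
  localStorage.setItem('ga_clientId', tracker.get('clientId'));
});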

Add-ons can also run via triggers, executing code at a recurring interval or when a user performs an action like opening a document or responding to a Google Form. In those cases, there’s no dialog or sidebar, so you should use the Google Analytics Measurement Protocol (see policies on the use of this service) to send user interaction data directly to Google Analytics servers via the UrlFetch service in Google Apps Script.
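
A minimal sketch of such a hit, assuming a placeholder tracking ID and a client ID you have persisted yourself (see the next paragraph):

function sendAnalyticsEvent(clientId, category, action) {
  // One Measurement Protocol "event" hit; all values are sent as strings.
  var payload = {
    v: '1',              // protocol version
    tid: 'UA-XXXXX-Y',   // placeholder: your Analytics tracking ID
    cid: clientId,       // a client ID you persist yourself
    t: 'event',          // hit type
    ec: category,        // event category
    ea: action           // event action
  };
  UrlFetchApp.fetch('https://www.google-analytics.com/collect', {
    method: 'post',
    payload: payload
  });
}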

A client ID is also required in that case, so I recommend using the Apps Script User Properties service to store it. Most examples on the web show how to generate a unique client ID for every call to Analytics, but this won’t give you an accurate user count.

You can also send the client ID generated on the client side to the server so that the same client ID is used for both client and server calls to Analytics, but at this stage, it is best to rely on the optional User ID in Google Analytics. While the client ID represents a client or device, the User ID is unique to each user and can easily be used in add-ons because users are authenticated. You can generate a User ID on the server side, store it among the user properties, and reuse it for every call to Analytics (both on the client and the server side).
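
A minimal sketch of that server-side pattern (the property key is arbitrary):

function getAnalyticsUserId() {
  // Generate the User ID once per user, then reuse it everywhere.
  var userProperties = PropertiesService.getUserProperties();
  var userId = userProperties.getProperty('analyticsUserId');
  if (!userId) {
    userId = Utilities.getUuid();
    userProperties.setProperty('analyticsUserId', userId);
  }
  return userId;
}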

Custom Dimensions & Metrics

In add-ons, we usually rely on event tracking rather than page views. You can attach different parameters to each event through categories, actions, labels, and values, but you can also record much more information by using custom dimensions & metrics.

For example, the Yet Another Mail Merge add-on is mostly used to send emails, and we have added many custom dimensions to better understand how it is used. For each new campaign (batch of emails sent), we record data linked to the user (e.g. free or paying customer, gmail.com or Google for Work / EDU user) and data linked to the campaign (e.g. email size, email tracking activated or not). You can then reuse those custom dimensions inside custom reports & dashboards.
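
With analytics.js, custom dimensions and metrics simply ride along on an event hit. A hypothetical example (the indexes must match dimensions and metrics defined in your own Analytics property):

ga('send', 'event', 'campaign', 'sent', {
  'dimension1': 'paying',         // e.g. free or paying customer
  'dimension2': 'googleForWork',  // e.g. account type
  'metric1': 150                  // e.g. number of emails in the campaign
});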


Once you begin to leverage all that, you can get very insightful data. Until October 2015, Yet Another Mail Merge let you send up to 100 emails per day for free. But we discovered through Analytics that most people sending more than 50 emails in one campaign were actually sending 100 emails (all the free quota they could get), yet we had failed to motivate them to switch to our paid plan.


As a result of this insight, we reduced the free plan to 50 emails per day and at the same time introduced a referral program that lets users earn extra quota for free (they still don’t pay, but they invite more users, which is valuable to us). With this change, we have greatly improved our revenue and scaled user growth.

Of course, we also use Google Analytics to track the efficiency of our referral program.

To help you gain more insight into your add-ons, check out the relevant pages in our documentation for the tools described in this post. We hope this information will help your apps become more successful!


Romain Vialard profile | website

Romain Vialard is a Google Developer Expert. After some years spent as a Google Apps consultant, he is now focused on products for Google Apps users, including add-ons such as Yet Another Mail Merge and Form Publisher.

Editor's note: Posted by Romain Vialard, a Google Developer Expert and developer of Yet Another Mail Merge, a Google Sheets add-on.

Yet Another Mail Merge is a Google Sheets add-on that lets users send multiple personalized emails based on a template saved as a draft in Gmail and data in a Google Sheet. It can send hundreds of emails, but that kind of operation usually takes a few minutes to complete. This raises the question: what should be displayed in the user interface while a function is running on the server side for a long time?


Real-time notifications in Add-ons

Firebase, which is all about real time, became the answer to that question. Last December, the Apps Script team announced a better version of the HtmlService, with far fewer restrictions and the ability to use external JS libraries. With Firebase, we now had a solution to easily store and sync data in real time.

Combined, these two improvements let users see, in real time, the number of emails sent by an Apps Script function running server-side. When the user starts the mail merge, the client calls the Apps Script function that sends emails and connects to Firebase at the same time. Each time the Apps Script function finishes sending an email, it increments a counter in Firebase and the UI updates in real time, as shown in the following image.


Implementation

Inside the loop, each time an email is sent (i.e. each time we call GmailApp.sendEmail()), we use the Apps Script UrlFetch service to write into Firebase through its REST API. Firebase makes this easy and secure: there’s no need for an OAuth authorization flow, just a Firebase app secret, as shown in the following example:

function addNewUserToFirebase() {
  var dbUrl = "https://test-apps-script.firebaseio.com";
  var secret = PropertiesService.getScriptProperties().getProperty("fb-secret");
  var path = "/users/";
  var userData = {
    romainvialard: {
      firstName: "Romain",
      lastName: "Vialard",
      registrationDate: new Date()
    }
  };
  var params = {
    method: "PUT",
    payload: JSON.stringify(userData)
  };
  UrlFetchApp.fetch(dbUrl + path + ".json?auth=" + secret, params);
}
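
The write that matters during a mail merge is the per-email counter update. Here is a minimal sketch of that call, assuming the same database URL and secret as above and a uid matching the path the client code below listens on:

function incrementEmailCounter(uid, emailsSentSoFar) {
  // Write the running total to the location the sidebar reads in real time.
  var dbUrl = "https://test-apps-script.firebaseio.com";
  var secret = PropertiesService.getScriptProperties().getProperty("fb-secret");
  var path = "/users/" + uid + "/nbOfEmailsSent";
  var params = {
    method: "PUT",
    payload: JSON.stringify(emailsSentSoFar)
  };
  UrlFetchApp.fetch(dbUrl + path + ".json?auth=" + secret, params);
}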

On the client side, thanks to the improved Apps Script HtmlService, we can use the official JS client library to connect to Firebase and retrieve the data stored previously. Specifically, the on() method in this library can be used to listen for data changes at a particular location in our database. So each time a new task is completed on server side (e.g. new email sent), we notify Firebase and the UI is automatically updated accordingly.

var fb = new Firebase("https://test-apps-script.firebaseio.com");
// UID is the current user's ID, generated on the server side and passed to this page.
var ref = fb.child('users/' + UID + '/nbOfEmailsSent');
ref.on("value", function(data) {
  if (data.val()) {
    document.getElementById("nbOfEmailsSent").innerHTML = data.val();
  }
});

More Firebase in Add-ons

In addition to the example above, there are other places where Firebase can be useful in Google Apps Script add-ons.

  • “Yet Another Mail Merge” also offers paid plans, so it needs to store our customer list. It turns out Firebase is perfect for that as well. Each time someone buys a plan, our payment tool calls an Apps Script web app, which writes the payment details into Firebase (a sketch of such a web app follows this list). Right after the purchase, the user can open the add-on, and a server-side function will query Firebase and see that premium features should now be enabled for this user.
  • Last but not least, at the end of a mail merge, we also use Firebase to provide real-time reporting of emails opened, directly in the sidebar of the spreadsheet.
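
As an illustration, that web app endpoint could look roughly like this (the payload fields and database path are assumptions, not our actual implementation):

function doPost(e) {
  // Parse the payment details posted by the payment tool.
  var payment = JSON.parse(e.postData.contents);
  var dbUrl = "https://test-apps-script.firebaseio.com";
  var secret = PropertiesService.getScriptProperties().getProperty("fb-secret");
  var params = {
    method: "PUT",
    payload: JSON.stringify(payment)
  };
  // Store the purchase under the buyer's ID so the add-on can look it up later.
  UrlFetchApp.fetch(dbUrl + "/customers/" + payment.userId + ".json?auth=" + secret, params);
  return ContentService.createTextOutput("OK");
}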

Those are just a few examples of what you can do with Apps Script and Firebase. Don’t hesitate to try it yourself or install Yet Another Mail Merge to see a live example. In addition, there is a public Apps Script library called FirebaseApp that can help you start with Firebase; use it like any other standard Apps Script library.

For example, you can easily fetch data from Firebase using specific query parameters:

function getFrenchContacts() {
  var firebaseUrl = "https://script-examples.firebaseio.com/";
  var base = FirebaseApp.getDatabaseByUrl(firebaseUrl);
  var queryParameters = {orderBy:"country", equalTo: "France"};
  var data = base.getData("", queryParameters);
  for (var i in data) {
    Logger.log(data[i].firstName + ' ' + data[i].lastName
        + ' - ' + data[i].country);
  }
}

Build your own add-ons with Google Apps Script. Check out the documentation (developers.google.com/apps-script) for more information, and try out the quickstart projects there. We look forward to seeing your add-ons soon!


Romain Vialard profile | website

Romain Vialard is a Google Developer Expert. After some years spent as a Google Apps consultant, he is now focused on products for Google Apps users, including add-ons such as Yet Another Mail Merge and Form Publisher.

Editor’s Note: Guest author Steve Ziegler is a senior architect at BetterCloud. — Arun Nagarajan

FlashPanel, a leading enterprise-grade security and management application for Google Apps, makes use of many Google Apps APIs. Each API comes with its own set of rate-limiting quotas, network traffic behavior and response conditions. The challenge we face is to provide our customers with a stable, consistent experience despite these API behavior differences. Fortunately, Google has provided two simple, yet powerful tools to greatly increase our success rate and reliability.

Controlling Throttle Rates with Guava’s RateLimiter


Google Apps APIs operate with quota rates per second or per day. As such, the FlashPanel development team’s objective is to minimize total operation time of a parallelized set of API-intensive processes while avoiding running into Google’s API quota limits. To accomplish this, we can very easily set up a thread-safe RateLimiter for a specific rate per second for each rate restricted API:

final RateLimiter driveApiRateLimiter = RateLimiter.create(QUOTA_RATE_PER_SECOND);

The Guava RateLimiter allows us to acquire a permit at the configured rate per second, blocking until one is available and then proceeding with execution.

To use it within the context of a method, we simply reference the rate limiter before making the API call:

public Result performDriveApiCall(RateLimiter driveApiRateLimiter,
                                  OtherParams otherParams) { // OtherParams is a placeholder type
        driveApiRateLimiter.acquire(); // blocks according to rate
        // make API call...
}

This provides us with a very easy solution for rate limiting parallelized calls to the same APIs.

Reducing Impact of Network Traffic Issues Through Configured Backoffs


Occasionally, we experience additional network traffic issues despite enforcing rate limits. Google suggests that API clients use an exponential backoff strategy to handle this variety of error response. Typically this category of network issue resolves itself within a fraction of a second, but occasionally more time is needed. All we need to do to receive a successful API response is retry the call some number of times, with the interval between retries growing exponentially.

Again, Google has made this easy through the Google HTTP Client for Java library’s ExponentialBackOff builder. This class allows us to configure the initial retry time interval, the exponential multiplier, the maximum total time to attempt to retry, and the maximum total time between retries. For example, we could configure a retry to span five minutes, growing with a factor of two, starting at a one second interval, and growing up to a maximum of one minute between retries. An API call with this configuration would retry with the following pattern in terms of seconds between retries:

1, 2, 4, 8, 16, 32, 60, 60, 60

If the API call still has not succeeded after this last retry, the failed HTTP response is returned. The code to configure such a strategy using the ExponentialBackOff.Builder reads as follows:

ExponentialBackOff backoff = new ExponentialBackOff.Builder()
    .setInitialIntervalMillis(ONE_SECOND)
    .setMultiplier(2.0)
    .setMaxIntervalMillis(ONE_MINUTE)
    .setMaxElapsedTimeMillis(FIVE_MINUTES)
    .build();

One potential “gotcha” we’ve seen is accidentally slamming a particular API with many simultaneous calls. In that event, not only would each of these API calls fail, but they would also schedule their retries to occur simultaneously: every subsequent wave of retries would fire at once, and the calls would keep failing due to excess per-second volume. The ExponentialBackOff class accounts for this by including a randomization factor in the retry logic that staggers each simultaneous API call at different intervals.

For example, using our previous backoff but now with randomization, one API call may retry with these intervals:

1.04, 1.9, 4.23, 7.8, etc.

While a second API call would retry with something like these intervals:

0.98, 2.04, 4.1, 8.15, etc.

With this randomization, we avoid API call sequencing collision, mitigating our chances of encountering quota related errors. To simply add this type of randomization, we append to our builder:

builder.setRandomizationFactor(RANDOMIZATION_FACTOR);

Once we have our exponential backoff strategy configured, the Google HTTP Client for Java library allows us to instantiate and assign an HttpBackOffUnsuccessfulResponseHandler to an HttpRequest as part of the request’s initialization:

private HttpBackOffUnsuccessfulResponseHandler handler = new HttpBackOffUnsuccessfulResponseHandler(backoff);

public void initialize(HttpRequest request){ 
    request.setUnsuccessfulResponseHandler(handler);
}

Final Thoughts


By restricting our API calls using Google Guava’s easy-to-use RateLimiter, and by employing an exponential backoff strategy using the Google HTTP Client for Java library’s ExponentialBackOff, we are able to significantly reduce the amount of network traffic errors received by our Google Apps API calls, improving FlashPanel’s reliability and the overall user experience.

Steve is Senior Architect at BetterCloud, the makers of FlashPanel, the number one security and management application for Google Apps. Follow BetterCloud on Google+ at plus.google.com/+Bettercloud.

Posted by Greg Knoke, Googler

Editor's note: This is a guest post by Benjamin Coe. Benjamin shares tips and best practices on using Ruby on Rails for integrating with Google Apps and launching on the Marketplace. — Arun Nagarajan
Yesware offers an app in the Google Apps Marketplace that allows our users to schedule reminders directly within the Gmail UI. Yesware’s app recently relaunched in the updated Google Apps Marketplace. In preparation, we revamped our existing Google Apps integration:
  • Replacing OpenID with OAuth 2.0 for Single-Sign-On.
  • Replacing 2-legged OAuth with OAuth 2.0 Service Accounts, for delegated account access.
  • Releasing a Gmail Contextual Gadget that worked within these new authentication paradigms.
We would like to share some of the decisions we made, and challenges we faced, while upgrading our production Ruby on Rails application to support the improved Google Apps Marketplace.

OAuth 2.0 for SSO

In the revamped Google Apps Marketplace, OAuth 2.0 replaces OpenID for facilitating Single-Sign-On. The flow is as follows:

  1. OAuth 2.0 credentials are created in the Cloud Console, within the same project that has the Google Apps Marketplace SDK enabled.
  2. When accessing your application, a user is put through the standard OAuth 2.0 authentication flow using these credentials.
  3. If the user has the Google Apps Marketplace App installed they will be logged directly into your application, skipping the authorization step.
To implement the OAuth 2.0 authentication flow, you can use the OmniAuth Google OAuth2 Strategy gem. Assuming you're already using OmniAuth, you simply add a line to initializers/omniauth.rb that looks something like this:
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :google_oauth2, ENV["GAM_OAUTH_KEY"], ENV["GAM_OAUTH_SECRET"]
end

Yesware already had a Google OAuth 2.0 authentication strategy, so we opted to subclass the Google OAuth 2.0 OmniAuth Strategy. This allowed us to continue supporting our existing OAuth 2.0 credentials, while adding support for Google Apps Marketplace SSO. Our subclassed strategy looked like this:
# Subclass the GoogleOauth2 Omniauth strategy for
# Google Apps Marketplace V2 SSO.
module OmniAuth
  module Strategies
    class GoogleAppsMarketplace < OmniAuth::Strategies::GoogleOauth2
      option :name, 'google_apps_marketplace'
    end
  end
end
Our final initializers/omniauth.rb file was this:
Rails.application.config.middleware.use OmniAuth::Builder do
  provider :google_oauth2,  ENV["OAUTH_KEY"], 
                            ENV["OAUTH_SECRET"], 
                            {:scope => ENV["OAUTH_SCOPE"]}
  provider :google_apps_marketplace,  ENV["GAM_OAUTH_KEY"],  
                                      ENV["GAM_OAUTH_SECRET"], 
                                      { :scope => ENV["GAM_OAUTH_SCOPE"], 
                                        :access_type => 'online' }
end

Note that :access_type is set to online. This is necessary to prevent the authorization prompt from being presented to an SSO user; OmniAuth defaults to an :access_type of offline.
That's all it takes. With this OmniAuth strategy in place, when a domain administrator installs your application, SSO will be available across the domain.

OAuth 2.0 Service Accounts

To support Yesware's reminder functionality, we needed offline access to a user's email account. In the past, this functionality was supported through 2-legged OAuth. In the new Google Apps Marketplace paradigm, OAuth 2.0 Service Accounts are the replacement.
  • In the Cloud Console, generate a private key for the OAuth 2.0 Service Account associated with your Google Apps Marketplace project. 
  • Download the .p12 private key generated. 
  • Place this key somewhere that will be accessible by your production servers, e.g., a certificates folder in your codebase. 
We used the Google API Ruby Client gem to generate an access token from our Service Account's keys.
Using the deprecated 2-Legged OAuth based approach, our authorization logic looked like this:

Gmail.connect!(:xoauth, 'ben@example.com', {
  token: authentication.token,
  secret: authentication.secret,
  consumer_key: google.key,
  consumer_secret: google.secret,
  read_only: true
})
Using the new Service Account Based Approach, it was as follows:
key = Google::APIClient::PKCS12.load_key(
  google_apps.service.p12path,
  'notasecret' # this is a constant value Google uses to password protect the key
)

service_account = Google::APIClient::JWTAsserter.new(
  google_apps.service.email,
  'https://mail.google.com/',
  key
)

client = Google::APIClient.new(
  :application_name => APPLICATION_NAME,
  :version => APPLICATION_VERSION
).tap do |client|
  client.authorization = service_account.authorize('ben@example.com')
end

Gmail.connect!(:xoauth2, 'ben@example.com', {
  :oauth2_token => client.authorization.access_token
})
With OAuth 2.0 Service Accounts, the underlying libraries we used to interact with Gmail remained the same. There were simply a few extra steps necessary to obtain an access token.

Contextual Gadgets and SSO

Yesware provides a Gmail Contextual Gadget for scheduling email reminders. To facilitate this, the gadget must interact with a user's email account, which meant implementing SSO through our contextual gadget. Google provides great reading material on this topic; however, it concentrates on the deprecated OpenID-based SSO flow, so we used a slightly modified approach.
Rather than OpenID, we used OAuth 2.0 for associating the opensocial_viewer_id with a user. To do this, we needed to modify our OmniAuth strategy to store the opensocial_viewer_id during authentication:
# Subclass the GoogleOauth2 Omniauth strategy for
# Google Apps Marketplace V2 SSO.
module OmniAuth
  module Strategies
    class GoogleAppsMarketplace < OmniAuth::Strategies::GoogleOauth2

      option :name, 'google_apps_marketplace'

      def request_phase

        # Store the opensocial_viewer_id in the session.
        # this allows us to bind the Google Apps contextual
        # gadget to a user account.
        if request.params['opensocial_viewer_id']
          session[:opensocial_viewer_id] = request.params['opensocial_viewer_id']
        end

        super
      end

    end
  end
end
Once an opensocial_viewer_id was connected to a Yesware user, we could securely make API calls from our contextual gadget. To cut down on the ritual surrounding this, we wrote a Devise Google Apps OpenSocial Strategy for authenticating the OpenSocial signed requests.

Now Go Forth

Once we figured out all the moving parts, we were able to use mostly off-the-shelf, mature libraries to build our Google Apps Marketplace integration. I hope that this retrospective look at our development process helps other Rails developers hit the ground running even faster than we did.
Benjamin Coe profile

Benjamin Coe cofounded the email productivity company Attachments.me, which was acquired by Yesware, Inc., in 2013. Before starting his own company, Ben was an engineer at FreshBooks, the world’s #1 accounting solution.
Ben’s in his element when writing scalable cloud-based infrastructure, and loves reflecting on the thought-process that goes into this. A rock-climber, amateur musician, and bagel aficionado, Ben can be found roaming the streets of San Francisco.
ben@yesware.com | github.com/bcoe | @benjamincoe

Editor's note: This is a guest post by Thomas Gerber. Thomas is the CTO of Altirnao, the developer of the AODocs document management app on the Google Apps Marketplace. Thomas tells the story of his application and provides some tips for developers considering integrating with Google Apps and launching on the Marketplace. — Arun Nagarajan


Google Drive is increasingly popular in the enterprise, and many organizations would like to leverage it as a replacement for their existing on-premises file servers. Moving physical file servers to Drive provides many benefits, such as reliability, cost-effectiveness and the ability to access the files from anywhere and any device. However, the storage structure of Google Drive, where files are owned by many different users, is significantly different from the centralized organization of a file server, where everything is under the control of a small number of system administrators.

To address this problem, AODocs uses the Google Drive API to automatically transfer the ownership of files to a system account, and thus create a sort of “managed area” within Google Drive. With the Google Drive API, AODocs has complete control over the folder structure and the permissions of files owned by this system account. AODocs can be deployed in one click from the Google Apps Marketplace, which makes our application visible (and easy to try out!) for every Google Apps administrator in the world.



Companies that want to store their files on Google Drive may be concerned about losing control of their data (e.g. access to files being lost when an employee leaves the company) and about managing sharing permissions.

AODocs uses a single system account (i.e. a Google Apps account belonging to the customer’s domain, but not used by any human) as a “proxy” to control the files. When a Google Drive file is added to an AODocs library, the ownership of the file is transferred to the AODocs system account and the file’s writersCanShare attribute is set to false, so that only AODocs is able to modify the file’s permissions afterwards.

To change the ownership of the file, we check if the system account can already access the file, and then either insert a new “owner” permission on it or use the Permissions.update method with the transferOwnership flag:

public void changeOwner(String user, String fileId, String newOwner) {
   // Find what is the current permission of the new owner on the file
   Permission newOwnerPermission = null;
   PermissionList permissionList = RetriableTask.execute(new DrivePermissionListTask(drive.permissions().list(fileId)));
   newOwnerPermission = findPermission(permissionList, newOwner);

   if (newOwnerPermission == null) {
      // New owner is not in the list, we need to insert it
      newOwnerPermission = new Permission();
      newOwnerPermission.setValue(newOwner);
      newOwnerPermission.setType("user");
      newOwnerPermission.setRole("owner");
      Drive.Permissions.Insert insert = drive.permissions().insert(fileId, newOwnerPermission);
      RetriableTask.execute(new DrivePermissionInsertTask(insert)); 
   } else {
      // New owner is already in the list, update the existing permission
      newOwnerPermission.setRole("owner");
      Drive.Permissions.Update update = drive.permissions().update(fileId, newOwnerPermission.getId(), newOwnerPermission);
      update.setTransferOwnership(true);
      RetriableTask.execute(new DrivePermissionUpdateTask(update));
   }
}

Since all the files are owned by the system account, AODocs completely controls the lifecycle of each file (how it is created, in which folder it is located, who can change its permissions, who can delete it, etc.). AODocs can thus provide higher-level document management features on top of Google Drive, such as configuring the retention time of deleted files, limiting external sharing to a whitelist of “trusted external domains”, or recording an audit log of file modifications.

As illustrated in the code snippet above, AODocs relies on the Google Drive API to perform all operations on the managed files. The main challenge we had when using the Drive API was to properly handle all the error codes returned by API calls, and to distinguish fatal errors that should not be tried again (for example, permission denied on a file) from temporary errors that should be retried later (for example, “rate limit exceeded”). To handle that, we encapsulated all our Google Drive API calls (we use the Java client library) in a class named RetriableTask, which is responsible for handling the non-fatal errors and automatically retrying the API calls with the proper exponential back-off. Here is a simplified version:

public class RetriableTask<T> implements Callable<T> {
[...]
  private final Callable<T> task;

[...]
  @Override public T call() {
     T result = null;
     try {
        startTime = System.currentTimeMillis();
        result = task.call();
     } catch (NonFatalErrorException e) {
        if (numberOfTriesLeft > 0) {
          // Wait some time, using exponential back-off in case of multiple attempts
          Thread.sleep(getWaitTime());

          // Try again
          result = call();
        } else {
          // Too many failed attempts: now this is a fatal error
          throw new RetryException();
        }
     } catch (FatalErrorException e) {
        // This one should not be retried
        Throwables.propagate(e);
     }
     return result;
  }
}

AODocs is designed to work seamlessly with Google Drive, and our top priority is to leverage all the integration possibilities offered by the Google APIs. We are very excited to see that new features are added very often to the Admin SDK, the Google+ API, and the Drive API, which will allow AODocs to provide more options to system administrators and improve the experience for our end users.


Thomas Gerber profile

Thomas is the CTO of Altirnao. Before founding Altirnao, Thomas led a team of senior technologists and architects on high-availability, high-performance implementations of enterprise software.

Editor’s Note: Guest author Niels Buekers is a Google Apps consultant at Capgemini Belgium. — Arun Nagarajan

During a recent Google Apps migration project, we received several requests to create custom groups of contacts so that users could more easily email frequent collaborators. Before switching to Google Apps, users created their own private distribution lists — but this approach led to overlapping groups that quickly fell out of sync.

The problem was a perfect case for Google Apps Script. We built a great solution that gives users as much power as possible with just a quick administrator review.


The situation before: either manually adding each contact or using a private contacts group.


Solution overview

To start the process, a user adds a specific label to a Gmail message. A script that runs on a timed trigger then generates a request to create a group for all the addresses in the message. The script writes this data to a spreadsheet that tracks group names and administrator approval.

/**
 * Retrieves all 'group_request' threads and creates a request.
 */
function processInbox() {
  // Get threads that have the group_request label.
  var groupRequestLabel = GmailApp.getUserLabelByName('group_request');
  var threads = groupRequestLabel.getThreads(0, 10);
  
  // For each thread, retrieve all recipients and create a group request.
  for (var i = 0; i < threads.length; i++) {
    var firstMessage = threads[i].getMessages()[0];
    var sender = firstMessage.getFrom();
    var recipients = [];
    
    // Add sender.
    recipients.push(parseAddresses(sender));
    
    // Add recipients.
    if (firstMessage.getTo()) {
      var toRecipients = parseAddresses(firstMessage.getTo());
      recipients.push(toRecipients);
    }
    
    // Add CCs.
    if (firstMessage.getCc()) {
      var ccRecipients = parseAddresses(firstMessage.getCc());
      recipients.push(ccRecipients);
    }

    // Write all recipients to a cell in the spreadsheet
    // and send emails to ask for group name and approval.
    createGroupRequestForRecipients(recipients,
        Session.getActiveUser().getEmail());
    
    // Remove label from this thread now that it has been processed.
    threads[i].removeLabel(groupRequestLabel);
  }
};
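
The parseAddresses() helper isn’t shown above; a minimal sketch of what it might do, returning an array of bare addresses from a recipient field:

function parseAddresses(addressField) {
  // Extract plain addresses from strings like
  // "Jane Doe <jane@example.com>, bob@example.com".
  var matches = addressField.match(/[\w.+-]+@[\w.-]+\.[\w-]+/g);
  return matches || [];
}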

Handling the request

Once the request has been processed and written to the spreadsheet, the script sends the user an email that asks her to suggest a name for the group in an Apps Script web app. A second email asks the administrator to visit the web app to approve or decline the request. The results are again stored in the spreadsheet.

The spreadsheet contains a second script, which is triggered for each modification. Once the script confirms that the request has been approved, it uses the Apps Script Domain Service to create the new group.

/**
 * Creates a new group in the Google Apps cPanel with the provided name
 * and members.
 */
function createGroupWithAddresses(addresses, groupName) {
  var group = GroupsManager.createGroup(groupName, groupName, groupName,
    GroupsManager.PermissionLevel.DOMAIN);
  var splitAddresses = addresses.split(',');
  for (var i = 0; i < splitAddresses.length; i++) {
    Logger.log('Adding ' + splitAddresses[i]);
    group.addMember(splitAddresses[i]);
  }
};

The result after successfully running the script.

This solution provides a simple way for users to request new Google groups, without all the overhead of manually creating an admin-managed distribution list.


Niels Buekers   profile | Twitter

Niels is a Google Apps consultant at Capgemini Belgium, with interest in both the technical track and change management. He recently visited Google’s London office to participate in a Google Apps Script hackathon, which resulted in the above solution. Niels is a strong believer in cloud solutions and loves to spread the word about Google Apps.

Editor’s Note: Guest author Jason Gordon is a co-founder of Beth Macri Designs — Arun Nagarajan

Beth Macri Designs creates jewelry from the point of view of a structural engineer. The forms are designed using generative 3D software systems and materialized using 3D printing technologies. Our company understands that to make beautiful fine jewelry, 3D printing is only the first step; traditional jewelry craft is then employed for final production. After our first product, The Hidden Message Necklace, was recently featured on The View as part of its Valentine's Day Gift Guide, we had a lot of orders to ship out. As soon as the mail leaves the building, though, the process is literally out of our hands: something unexpected was bound to happen to at least one or two packages. Several package-tracking services exist, but getting the names and tracking numbers into them was a cut-and-paste operation.

I knew that all of the tracking numbers were being delivered by email and I had already set up a Gmail filter to archive them and apply a label. With a little help from Google Apps Script, I knew I could automatically parse those emails and add them to my account on PackageTrackr (which syncs to their newer service, Fara).

The script supports reading emails from multiple shipping providers and is set up so one could easily add more. Every 30 minutes, on a time-driven trigger, the script runs and uses the Gmail service to look through unread emails under the shipping-provider label, then parses the name and tracking number out of each one. The provider, tracking number, and recipient are stored in a JavaScript array.

function getUSPSConversations(){
  return GmailApp.search("in:usps is:unread subject:(Click-N-Ship)");
}

function matchUSPSHTML(data) {
  var out = [];
  var track_num = data.match(
      /TrackConfirmAction\Winput\.action\WtLabels\=(\d+)/g);
  var to = data.match(/Shipped.to.*[\r\n]*.*>([a-zA-Z\s-_]*)<br>/g);
  for (var i in track_num) {
    var o = new Object();
    var track = track_num[i].match(/(\d+)/g);
    var person = to[i].match(/>([a-zA-Z\s-_]+)<br>/);
    var myPerson = person[1].replace(/(\r\n|\n|\r)/gm, "");
    o["number"] = track[0];
    o["carrier"] = "USPS";
    o["person"] = myPerson;
    out.push(o);
  }
  return out;
}

You can parse all of your different shipping providers in one run of the script. After all of the shipment emails are read, it composes an email to PackageTrackr to give it all of the tracking numbers it just harvested.

var user = Session.getActiveUser().getEmail();
var body = ""; // one line per package, built up below
if (data.length > 0) {
  for (var d in data) {
    body += this["formatForPackageTrackr"](data[d]["number"],
        data[d]["carrier"], data[d]["person"]);
  }

  GmailApp.sendEmail("track@packagetrackr.com", "Add Packages",
      body, {bcc: user});
}

function formatForPackageTrackr(tracking_num, service, person){
  return "#:" + tracking_num + " " + service + " " + person + "\n";
}

Down the line, other shipping providers could be added, such as UPS and FedEx. Additionally, more tracking services could be supported instead of just PackageTrackr.


Jason Gordon   profile

Jason Gordon is a co-founder at jewelry startup Beth Macri Designs. He is responsible for software development, logistics and e-commerce. While working at Beth Macri Designs, Jason gets to find creative ways to put his software development skills to work to improve logistics and user experience.

Editor’s Note: Guest author Martin Hawksey is an advisor at the Jisc Centre for Educational Technology and Interoperability Standards. — Dan Lazin

When I started looking at Google Apps Script in 2010, one of the things that attracted me was the ease with which a non-developer like me could start customising Google Apps with only a few lines of code. Since then, the rich community of users and examples has continued to grow, and I’ve built event booking systems, entire student feedback solutions, and even integrated with Mozilla Open Badges.

Recently, Justin Marckel, the assistant principal at Cornatzer Elementary School in North Carolina, asked for help in modifying one of my existing Apps Script examples. Justin was recording teachers’ classroom activities using a Google Form, then manually copying and pasting data into separate spreadsheets for each teacher to review. Justin wanted to know whether there was a way for a Google Form to store the results in a master spreadsheet, then filter results to each teacher’s spreadsheet.

The basic pseudocode would be:

on form submit
  if teacher’s spreadsheet doesn’t exist, then
    create spreadsheet
    add teacher as viewer
    store id
  else
    get id
    open teacher’s spreadsheet
  copy values to teacher’s spreadsheet

Here’s a closer look at each of the steps.

Handling a form submission event

Apps Script offers three triggers specific to Google Sheets: “on open,” “on edit,” and “on form submit.” Looking at the Understanding Events documentation, we can see that a form submit trigger gives us a few options for how to pull the submitted values out of the event parameter (usually called e). We can get the data as an array via e.values, a Range object via e.range, or a JavaScript object that pairs the form questions with the respondent’s answers via e.namedValues. In this project, the e.values array is most convenient, and it will look something like this:

['2010/03/12 15:00', 'bob@example.com', 'Bob', '27', 'Susan', '25']

First, though, we have to add the form-submission trigger. The user could add it manually from the script editor’s Resources menu, but in this case, let’s manage triggers programmatically:

function setup() {
  if (ScriptApp.getScriptTriggers().length === 0) {
    ScriptApp.newTrigger('doOnFormSubmit')
        .forSpreadsheet(SpreadsheetApp.getActiveSpreadsheet())
        .onFormSubmit()
        .create();
  }
}

Creating and managing permissions on a spreadsheet

One of the big advantages of Apps Script is that you’re automatically working in a Google-authenticated environment. The result is that you can programmatically create a new spreadsheet with one line of code, then add a teacher as a viewer with just one more line:

var newSS = SpreadsheetApp.create('Spreadsheet Name');
newSS.addViewer('email-address-of-teacher');

Writing data to a spreadsheet

Writing data to a sheet requires more than a one-liner just because we need to specify which cells to write to. The Range.setValues() method expects a 2D array; because we’ve already retrieved the response to the form as an array, it’s easy to throw those values into a row of cells:

var destSS = SpreadsheetApp.openById(id); // open teacher spreadsheet
var destSheet = destSS.getSheets()[0]; // grab first sheet
var insertRow = destSheet.getLastRow() + 1; // next row to enter data
destSheet.getRange(insertRow, 1, 1, e.values.length)
   .setValues([e.values]);
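
Putting the pieces together, here is a minimal sketch of the submit handler itself. The position of the teacher’s email in e.values and the use of Script Properties to map teachers to spreadsheet IDs are assumptions for illustration:

function doOnFormSubmit(e) {
  var teacherEmail = e.values[1]; // assumed answer position
  var id = ScriptProperties.getProperty(teacherEmail);
  if (!id) {
    // First submission for this teacher: create and share a spreadsheet.
    var newSS = SpreadsheetApp.create('Feedback for ' + teacherEmail);
    newSS.addViewer(teacherEmail);
    id = newSS.getId();
    ScriptProperties.setProperty(teacherEmail, id);
  }
  var destSheet = SpreadsheetApp.openById(id).getSheets()[0];
  destSheet.getRange(destSheet.getLastRow() + 1, 1, 1, e.values.length)
      .setValues([e.values]);
}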

Simple, effective and efficient

The completed project is here. The bulk of the form-submission handling (including error logging) happens in around 50 lines of code, and I was able to complete the project within an hour. Now Justin no longer needs to copy, paste, and set up separate spreadsheets, potentially saving him hours of work. Justin recently contacted me to say:

“We have successfully used our program over the past couple of months to provide teachers with meaningful and efficient feedback. It has been successful at several other schools as well, and I got word today that our school district is looking at adopting it as a district-wide tool.”

This is just one of a growing number of examples of how Google Apps Script is directly benefitting educators by allowing custom solutions with the security, convenience, and power of Google Apps.


Martin Hawksey   profile | twitter | blog

Martin is an advisor at the Jisc Centre for Educational Technology and Interoperability Standards (CETIS), a national advisory and innovation centre that works on educational technology and standards for the UK Higher Education and Post-16 Education sectors. Martin is an active contributor to the Apps Script community and regularly shares projects on his blog, MASHe.

Editor’s Note: Guest author Mark Showalter is a Senior Research Scientist at the SETI Institute. — Arun Nagarajan

In 2011 and 2012, while studying the region around Pluto with the Hubble Space Telescope, I discovered the dwarf planet’s fourth and fifth known moons. Like all new astronomical objects, they started out with rather prosaic names — “S/2011 (134340) 1” and “S/2012 (134340) 1”, or, for short, P4 and P5.

I soon found my inbox stuffed with hundreds of naming suggestions. With so much interest, it didn’t seem fair to leave the job to just a handful of scientists. Instead, we decided to let the public propose and vote on the names of Pluto’s moons.

We knew that the web servers at the SETI Institute, my research home, could never handle the bandwidth required for such a task. However, the Institute has built strong relationships with Google through our extensive use of G+, and our friends there were thrilled to let us use Google services for the demanding task. I asked my husband Frank Yellin, who works on the Gmail team, for help in setting up the forms and collecting the data. Google Forms and Google Sheets were obvious choices, but with the volume of contributions and votes we were expecting, we knew we’d need programmatic help checking for duplicate nominees, filtering out inappropriate names, and tallying the votes.

Frank is a longtime Java engineer, so he tried a Java solution first. As the votes started to pour in at the rate of several per second, however, it became clear that the program could barely keep pace. Votes were coming in almost as fast as they were being downloaded and tallied. In a panic, Frank realized it was time to learn Apps Script — in fact, time to learn JavaScript altogether.

With some help from his colleagues (“How do I split a string?” “How do I make a hash table?”), he turned the project around in a few hours. Processing that had taken tens of minutes using Java took mere seconds in Apps Script, since nothing but the results ever had to leave the data center.

We were right to be prepared. By the time we closed the write-in ballot, we had received 30,000 write-in nominees and more than 450,000 votes.

We are now using the results of the poll to support our proposal for the formal names of P4 and P5. That decision is currently in the hands of the International Astronomical Union. When the final decision is made, Pluto and Charon and Nix and Hydra will be joined by two more representatives of the ancient underworld.


Dr. Mark Showalter   profile

Planetary scientist Mark Showalter is a Senior Research Scientist at the SETI Institute. His primary interest is the dynamics of planetary rings. To date, this interest has led him to discover five new moons and three new rings.

Editor’s Note: Guest author Andrew Stillman is a teacher who works at New Visions for Public Schools, a non-profit that provides direct support services to 76 New York City high schools. — Arun Nagarajan

On March 16th, as a green tide of college students flowed into Manhattan for a day of rousing revelry, more than 50 young coders from New York-area computer science programs and 30 teachers were drawn instead to Kean University in New Jersey by the gravity of St. Hacktrick’s Day, our first Apps Script for EDU Codeathon. Inspired by the viral popularity of the Flubaroo, Doctopus, and autoCrat scripts for teachers, St. Hacktrick’s Day aimed to pair coders with educators to produce more free, smart tools for education.


Teacher Daniel Scibienski works as an elementary ESL teacher in NJ. He helped organize and emcee the event, and was on the winning team that built a picture-prompt generator for Google Docs.

Most of the student scripters were on their first day of spring break, making our huge turnout for this event all the more remarkable. Product designers — all working educators who took time out on a Saturday — traveled from as far north as Ulster County, NY and as far south as Virginia, while we had others who joined teams via G+ Hangouts from Singapore, Montreal, Vancouver, and London.


This team built a class-roster Google Site replicator using Apps Script, cookies, and Coke. Their EDU design partner was located in the UK!

Unlike a typical hackathon, teams weren’t simply building their own ideas — instead, to ensure their scripts would be truly useful in the classroom, we solicited project proposals through a Google Moderator board. By the day of the event, we had 48 ideas with 187 votes from educators around the world.

In all, 17 teams built demo-ready prototypes in less than 6 hours of coding. The Apps Script team rounded up a few Nexus 7 tablets for the winners below and invited them to present their projects to the Google Docs engineering team:


Popular vote: Picture Prompt Generator
Summary: Inserts kid-friendly pictures from Google Image Search into student documents. Elementary students then write stories based on the visual prompts.
Design: Daniel Scibienski
Code: Ashish Nandwani and Krutika Shah


Judges' choice: Plagiarism Detector
Summary: Uses a similarity algorithm to rank Google Documents by originality.
Design and code: Alice Lin, Basim Baig, and Jackie Wei (Stony Brook University)


Judges' choice: Unpivot Google Form Data
Summary: Removes duplicates from Google Form data and transforms it for use in a pivot table.
Design: Ron Turchyniak
Code: Andrew Ireland, Sangwook Lee, and Steve Byung Park (Stony Brook University)


Teams have been asked to open-source their code and donate it to New Visions for Public Schools, the support organization I work for, and to consider improving their projects for use by educators everywhere. We’ll keep you posted as these resources become available.

Big thanks to our participants, to organizers Meredith Martin, Dave Zirkle, Daniel Scibienski, Emily Graves, Diana Potts, Lisa Thumann, Andrew Carle, and to Google’s Arun Nagarajan, Saurabh Gupta, and Zach Yeskel.


Andrew Stillman   profile

Andrew Stillman is a career STEM educator who works as Program Officer for Digital Instruction at New Visions for Public Schools, a non-profit that provides direct support services to 76 New York City high schools. Andrew founded YouPD.org and has written a number of popular Apps Scripts for schools designed to improve efficacy through better workflows, communications, and data management in Google Apps.

Editor’s Note: Guest author John Gale is a Solutions Developer at Appogee, a Google Cloud Service Partner. — Arun Nagarajan

Ever since we launched Appogee Leave — the first tool in the Google Apps Marketplace for tracking employees’ absences and time off — customers have been asking, “Can you support my native language?”

Our partners offered to help crowdsource the translation, but it was a challenge to know where to begin. We started by identifying a few needs:

  • Users must be able to provide new translations
  • Users must be able to flag bad translations and recommend changes
  • Developers must be able to integrate completed translations
  • Customer service must be able to keep users informed of progress

With just a couple days’ effort in Google Apps Script, we created a complete application for crowd-sourced localization that handles each of those requirements. You can get a glimpse of the system in the screenshot below.

Source: Appogee

Let’s take a look at a few specific Apps Script tricks we used to make the whole thing work.


Avoid Collisions with Lock Service

Like many Apps Script users, we store almost all of the data for our translation system in Google Sheets, including both the list of English terms we want to translate and users’ translations.

During testing, we found that if two users submitted translations at the same time, the spreadsheet wrote both sets of changes to the same place, causing us to lose one user’s updates. To solve this, we use Apps Script’s semaphore-based Lock Service. In the code below, a public lock ensures that a user has temporary exclusive use of the spreadsheet so that their correction is added even if another user also submits a correction.

function submit(e){
  /*  get the fields from the UI callback  */
  var incorrect = e.parameter.foreignWordIncorrectTxt;
  var correct = e.parameter.foreignWordCorrectTxt;
  var reason = e.parameter.reasonTxt;
  var lang = e.parameter.hiddenLang;

  /*  validate the input; return the user a message if invalid  */
 
  /*  open the spreadsheet  */
  var active_user_email = UserProperties.getProperty('user_email') || "";
  var master_spreadsheet = SpreadsheetApp.openById(MASTER_SPREADSHEET_KEY);
  var correction_sheet = master_spreadsheet.getSheetByName('Corrections');
    
  /*  get a lock and update the spreadsheet  */
  var lock = LockService.getPublicLock();
  lock.waitLock(30000);    
  correction_sheet.appendRow([
      lang, incorrect, correct, reason, active_user_email
    ]);
  SpreadsheetApp.flush();
  lock.releaseLock();

  /*  reset the UI  */
  return reset();
}

You’ll note that this code opens the spreadsheet before obtaining a lock. At this point, we are only reading, not writing, and thus do not yet require a lock. We then tell Apps Script we are prepared to wait up to 30 seconds for our turn to lock the worksheet. On the rare occasion that a lock is not available within 30 seconds (usually because somebody else has an exclusive lock), the code throws an exception and stops execution.

Once we have acquired the lock, we quickly write the correction to the spreadsheet — including a call to SpreadsheetApp.flush() to ensure the data is written immediately — and release the lock.


Save Time with Cache Service

Because the translations are stored in a spreadsheet along with information about who provided them, it’s easy to recognize our top contributors through a leaderboard. The leaderboard data is a good candidate for caching because it’s shown to a large number of people, but only changes when we receive new updates from top-ranking users.

Like the Lock Service described earlier, the Cache Service provides both public and private variants. The public cache is useful for storing data that should be available to all users, such as the leaderboard. The private cache is more appropriate for storing information about a user, such as the translations they have submitted so far.

Since the Apps Script cache can only store strings, complex objects must first be converted. Lucky for us, Apps Script provides JSON utilities that make this conversion easy, as shown in this example:

function getBoardData(){
  var cache = CacheService.getPublicCache();
  var leaderboard_data = cache.get('leaderboard_data');
  if (leaderboard_data == null) {
    leaderboard_data = getTopTen();
    cache.put('leaderboard_data',
              Utilities.jsonStringify(leaderboard_data),
              3600);
  } else {
    leaderboard_data = Utilities.jsonParse(leaderboard_data);
  }
  return leaderboard_data;
}

Our hope is that the leaderboard will encourage users to provide more translations by introducing some friendly competition.

Thanks to Google Apps Script and the techniques shown above, we built a powerful crowdsourcing translation system without unnecessary complexity or development effort. If you’d like to help translate Appogee Leave, we’d love to have your contribution.

Source: Appogee


John Gale   profile | twitter

John Gale is a Solutions Developer at Appogee, a Google Cloud Service Partner. John develops software that helps customers make the most of the Google Cloud platform. He has worked on a range of cloud-based applications including Google Apps Marketplace applications, client projects involving Lotus Notes application migration, and mobile data-capture applications.

Editor’s Note: Guest author Ronald Dahrs runs Forscale, an IT and project management company based in the Netherlands. — Arun Nagarajan

Google Apps is well-suited for project management because it’s a cloud-based productivity suite that helps you and your team connect and get work done from anywhere on any device. Using Google Apps Script, we can push the capabilities even further to create advanced scheduling and management tools. A common tool in project management circles is the Gantt chart: a schedule of the tasks in the project and how they relate to each other over time.

A spreadsheet that generates this kind of Gantt chart is available in the template gallery today. In this post, we’ll explore the basics of how the template works and explain a few of the Apps Script techniques that transform Google Sheets into such a powerful project management tool.

When you open the template, you’ll see stubs for each type of task, but the screenshot above shows an example of a slightly larger project plan — in fact, the same data used to generate the Gantt chart below.

The template’s sophisticated formulas rely on the structure of the table to enable schedule awareness and task dependencies. However, we still ensure that the user can rename, rearrange, or add columns by using a hidden header to identify each column. This diagram demonstrates the spreadsheet’s structure:
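
To make that concrete, here is a minimal sketch of how such a lookup could work, assuming the hidden keys live in the first row (the key names are illustrative, not the template’s actual code):

function getColumnByKey(sheet, key) {
  // The hidden first row holds stable column keys, so users can rename
  // or rearrange the visible headers without breaking the script.
  var keys = sheet.getRange(1, 1, 1, sheet.getLastColumn()).getValues()[0];
  for (var i = 0; i < keys.length; i++) {
    if (keys[i] === key) {
      return i + 1; // 1-based column index
    }
  }
  return -1; // key not found
}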

In Apps Script, we use the spreadsheet’s onEdit() event to monitor user interaction with the schedule portion of the spreadsheet and update the Gantt chart accordingly. The powerful JavaScript language does all the required summary calculations based on the provided dates and completion percentages.
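
A minimal sketch of that handler (the sheet name and update function are illustrative):

function onEdit(e) {
  // Recalculate only when the edit touches the schedule sheet.
  var sheet = e.source.getActiveSheet();
  if (sheet.getName() === 'Schedule') {
    updateGanttChart(); // hypothetical function that repaints the chart cells
  }
}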

We have also used Apps Script’s addMenu() method to build a custom menu that calls row-oriented functions like indenting tasks to get a so-called Work Breakdown Structure with summary tasks. If you just want to see an overview, the custom menu allows you to collapse tasks, which we accomplished through the hideRows() method.
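
A sketch of how such a menu and the collapse action might be wired up (the menu entries, handler names, and numberOfSubtasks() helper are illustrative):

function onOpen() {
  SpreadsheetApp.getActiveSpreadsheet().addMenu('Project', [
    {name: 'Indent task', functionName: 'indentTask'},
    {name: 'Collapse task', functionName: 'collapseTask'},
    {name: 'Refresh', functionName: 'refreshSchedule'}
  ]);
}

function collapseTask() {
  // Hide the rows belonging to the selected summary task.
  var sheet = SpreadsheetApp.getActiveSheet();
  var row = sheet.getActiveRange().getRow();
  sheet.hideRows(row + 1, numberOfSubtasks(row));
}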

For changes that do not trigger an onEdit() event (for example, clearing a row), the user can use the menu’s Refresh command to recalculate the schedule.

The template stores user preferences as Script Properties and offers an interactive user interface built in UiApp to change those settings:
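
For example, a preference could be stored and read back like this (a sketch using the era’s Script Properties API; the property name is illustrative):

function saveSetting(weekStartsOn) {
  ScriptProperties.setProperty('weekStartsOn', weekStartsOn);
}

function getSetting() {
  // Fall back to a default if the user hasn't saved a preference yet.
  return ScriptProperties.getProperty('weekStartsOn') || 'Monday';
}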

Finally, to render the Gantt chart, we use cell background colors to visually group and highlight the appropriate cells. This creates the effect of a continuous calendar with clearly visible start and finish dates for each task.

var ganttColors = ganttRange.getBackgroundColors();
var ganttValues = ganttRange.getValues();

// ... update ganttColors and ganttValues for each task's date range ...
ganttRange.setBackgroundColors(ganttColors).setValues(ganttValues);

Ronald Dahrs

Ronald combines his knowledge of project management and software solutions at his company, Forscale. He believes Google Apps is an excellent platform for online project management. He uses Google Apps Script to integrate the services to manage a wide range of projects.

Editor’s Note: Guest author David Fothergill works at QueryClick, a search-engine marketing company based in the UK. — Eric Koleda

Working in Paid Search account management, I’ve often found tremendous wins from making reports more useful and efficient. Refining your analytics streamlines your workflow, leaving more time for strategic and proactive thinking, which is what we’re paid for, not endless number-crunching.

The integration between Google Analytics and Apps Script has opened up lots of opportunities for me to make life easier through automation. In a recent blog post on my agency's website, I outlined how an automated report can quickly “heatmap” conversion rate by time and day. The aim of the report is to provide actionable analysis to inform decisions on day-part bidding and budget strategies.

In that post, I introduce the concepts and provide the scripts, sheet, and instructions to allow anyone to generate the reports by hooking the scripts up to their own account. Once the initial sheet has been created, the script only requires the user to provide a Google Analytics profile number and a goal for which they want to generate heatmaps. In this post, we’ll break down the code a bit.


Querying the API

This is a slight amendment to the code that queries the Core Reporting API. Apart from customising the optArgs dimensions to use day and hour stats, I have modified it to use goal data from the active spreadsheet, because not all users will want to measure the same goals:

function getReportDataForProfile(profileId, goalNumber) {
  // Take the goal chosen on the spreadsheet and select the correct metric.
  var tableId = 'ga:' + profileId;
  var goalId;
  if (goalNumber === 'eCommerce Trans.') {
    goalId = 'ga:transactions';
  } else {
    goalId = 'ga:goal' + goalNumber + 'Completions';
  }
  // Continue as per the example in the Google documentation ...
}
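
The elided body roughly follows Google's Core Reporting API example. A minimal sketch of what replaces that final comment, assuming the Analytics advanced service is enabled and using an illustrative fixed date range:

  // Day-of-week and hour dimensions drive the heatmap pivot.
  var optArgs = {
    'dimensions': 'ga:dayOfWeek,ga:hour'
  };
  // The dates below are illustrative assumptions; the real report
  // would read them from the spreadsheet or compute a rolling window.
  return Analytics.Data.Ga.get(tableId, '2013-01-01', '2013-01-31',
      goalId, optArgs);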

Pivoting the Data

Once we’ve brought the Google Analytics data into the spreadsheet in raw form, we use a pivot table to plot the hour of the day against the day of the week.

For this type of report, I'd like to use conditional formatting to heatmap the data — but conditional formatting in Google Sheets is based on fixed values, whereas we want the thresholds to change based on cell values. However, thanks to the flexibility of scripts, I was able to achieve dynamic conditional formatting.


Conditional Formatting Using Scripts

The script needs to know the boundaries of our data, so I’ve set up several cells that display the maximums, minimums, and so forth. Once these were in place, the next step was to create a function that loops through the data and calculates the desired background color for each cell:

function formatting() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet()
      .getSheetByName('Heatmap');

  // Reset the heatmap area, then read its values in one batch call.
  var range = sheet.getRange('B2:H25');
  range.setBackgroundColor('white');
  var values = range.getValues();

  // Get the boundary values for the conditional formatting.
  var boundaries = sheet.getRange('B30:B35').getValues();

  // Get the background colors of the range to 'heatmap'.
  var backgroundColours = range.getBackgroundColors();

  for (var i = 0; i < values.length; i++) {
    for (var j = 0; j < values[i].length; j++) {
      if (values[i][j] > boundaries[1][0]) {
        backgroundColours[i][j] = '#f8696b';  // over 90%
      } else if (values[i][j] < boundaries[1][0]
          && values[i][j] >= boundaries[2][0]) {
        backgroundColours[i][j] = '#fa9a9c';  // between 80% and 90%
      } else if (values[i][j] < boundaries[2][0]
          && values[i][j] >= boundaries[3][0]) {
        backgroundColours[i][j] = '#fbbec1';  // between 60% and 80%
      } else if (values[i][j] < boundaries[3][0]
          && values[i][j] >= boundaries[4][0]) {
        backgroundColours[i][j] = '#fcdde0';  // between 40% and 60%
      } else if (values[i][j] < boundaries[4][0]
          && values[i][j] >= boundaries[5][0]) {
        backgroundColours[i][j] = '#ebf0f9';  // between 20% and 40%
      } else if (values[i][j] < boundaries[5][0]) {
        backgroundColours[i][j] = '#dce5f3';  // less than 20%
      }
    }
  }
  // Set the background colors as arranged above.
  range.setBackgroundColors(backgroundColours);
}

Calling the functions based on the profile ID and goal number specified in the main sheet gives us a quick, actionable report that can easily be adapted for use across multiple accounts.

function generateHeatmap() {
  try {
    var sheet = SpreadsheetApp.getActiveSpreadsheet()
        .getSheetByName('Heatmap');
    var profileId = sheet.getRange(4, 10).getValue();   // cell J4
    var goalNumber = sheet.getRange(7, 10).getValue();  // cell J7
    if (profileId === '') {
      Browser.msgBox('Please enter a valid Profile ID');
    } else {
      var results = getReportDataForProfile(profileId, goalNumber);
      outputToSpreadsheet(results);
      formatting();
    }
  } catch (error) {
    Browser.msgBox(error.message);
  }
}

This was my first foray into the slick integration between the Core Reporting API and spreadsheets, but it has proven a valuable test case for how effective it will be to roll this method of reporting into our daily process of managing accounts.

We have now started on the next steps, which involve building out “client dashboards” to give account managers access to useful reports at the press of a button. This moves us toward the goal of minimizing the time spent gathering and collating data, freeing it up to add further value to client projects.

Editor's Note: If you're interested in further scripting your AdWords accounts, take a look at AdWords Scripts, a version of Apps Script that's embedded right into the AdWords interface.


David Fothergill

Guest author David Fothergill is a Project Director at the search marketing agency QueryClick, focusing on Paid Search and Conversion Optimisation. He has been working in the field for around 8 years and currently handles a range of clients for the company, in verticals ranging from fashion and retail through to industrial services.

Editor’s Note: Guest author Ashraf Chohan works at the Government Digital Service (GDS), part of the UK Cabinet Office. -- Arun Nagarajan

Recently, when we were preparing the launch of GOV.UK, my team was tasked with creating a series of high-level metrics reports which could be quickly compiled and presented to managers without technical or analytical backgrounds. These reports would be sent daily to ministers and senior civil servants of several government departments, with the data customised for each department.

We decided to use Adobe InDesign to manage the visual appearance of the reports. InDesign’s data-merge functionality, which can automatically import external data into the layout, made it easy to create custom departmental reports. The challenge was to automate the data collection using the Google Analytics API, then organize the data in an appropriate format for InDesign’s importer.

In a previous post on this blog, Nick Mihailovski introduced a tool which allows automation of Google Analytics Reporting using Google Apps Script. This seemed an ideal solution because the team only had basic developer knowledge, much of the data we needed was not accessible from the Google Analytics UI, and some of the data required specific formatting prior to being exported.

We started by building the core reports in a Google spreadsheet that pulls in all of the required raw data. Because we wanted to create daily reports, the start and end dates for our queries referenced a cell that defaulted to yesterday’s date (=TODAY()-1).
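
In the script, that cell is read and converted to the API’s expected date format; a minimal sketch, where the sheet name and cell reference are illustrative assumptions:

// Read the report date from its cell and format it for the API.
var dateCell = SpreadsheetApp.getActiveSpreadsheet()
    .getSheetByName('Queries').getRange('B1').getValue();
var startDate = Utilities.formatDate(dateCell, 'GMT', 'yyyy-MM-dd');
var endDate = startDate;  // single-day report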

These queries were dynamically fed into the Google Analytics API through Apps Script:

// All variables are read from each of the “query” cells.
var optArgs = {
  'dimensions': dimensions,
  'sort': sort,
  'segment': segment,
  'filters': filters,
  'start-index': '1',
  'max-results': '250'
};

// Make a request to the API.
var results = Analytics.Data.Ga.get(
    tableId,     // Table ID (format ga:xxxxxx).
    startDate,   // Start date (format yyyy-MM-dd).
    endDate,     // End date (format yyyy-MM-dd).
    metrics,     // Comma-separated list of metrics.
    optArgs);

Next, we created additional worksheets that referenced the raw data so that we could apply the first stage of formatting. This is where storing the data in a spreadsheet really helps, as data formatting is not really possible in the Google Analytics UI.

For example, the final report had a 47-character limit for page titles, so we restricted the cells in the spreadsheet to 44 characters and automatically truncated longer titles by appending “...”.
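
A single spreadsheet formula can handle that truncation; for example, assuming the raw title sits in cell A2:

=IF(LEN(A2) > 47, LEFT(A2, 44) & "...", A2)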

Once the initial formatting was complete, we used formulas to copy the data into a summary sheet specially laid out so it could be exported as a CSV file that merges seamlessly into InDesign.

Below is an example of how a report looks on publication. Nearly everything on the page was extracted from the API tool, including the department name and the day number. Because most of the data was automated, it required minimal effort on our part to assemble these reports each morning.

We discovered an added bonus of pulling the data into a Google spreadsheet: it also allowed us to publish the data to a Google site. This helped us display data to stakeholders without adding lots of users to our Google Analytics account.

The tools let us present Google Analytics data in deeper, more creative ways. That’s really important as we share information with more and more non-technical people, whether they’re inside GDS or beyond.


Ashraf Chohan

Guest author Ashraf Chohan works at the Government Digital Service (GDS), part of the UK Cabinet Office. GDS’s role is to deliver digital transformation of government services. Ashraf is a Product Analyst for GOV.UK, a new site for public services and information.