
Gmail servers support the standard IMAP and POP protocols for email retrieval, but sometimes you only need to know whether there are any new messages in your inbox. Using either of those protocols may seem like overkill in this scenario, which is why Gmail also exposes a read-only feed, called the Gmail Inbox Feed, that you can subscribe to in order to get the list of unread messages in your inbox.

The Gmail Inbox Feed is easily accessible by pointing your browser to https://mail.google.com/mail/feed/atom and authenticating with your username and password if you are not already logged in.


Using basic authentication to access the inbox feed doesn’t provide a very good user experience if we want delegated access. In that case, we should instead rely on the OAuth authorization standard, which is fully supported by the Gmail Inbox Feed.

OAuth supports two different flows. With 3-legged OAuth, a user can give a third-party application access to a resource they own without sharing their credentials. The 2-legged flow, instead, resembles a client-server scenario where the application is granted domain-wide access to a resource.

Let’s write a .NET application that uses 2-legged OAuth to access the Gmail Inbox Feed for a user in the domain and list unread emails. This authorization mechanism also suits Google Apps Marketplace developers who want to add inbox notifications to their applications.

There is no dedicated client library for this task, and the Inbox Feed is not based on the Google Data API protocol, but we’ll still use the .NET library for Google Data APIs for its OAuth implementation.

Step-by-Step

First, create a new C# project and add a reference to the Google.GData.Client.dll released with the client library. Then add the following using directives to your code:

using System;
using System.Linq;
using System.Net;
using System.Net.Mail;
using System.Xml;
using System.Xml.Linq;
using Google.GData.Client;

The next step is to use 2-legged OAuth to send an authorized GET request to the feed URL. In order to do this, we need our domain’s consumer key and secret and the username of the user account we want to access the inbox feed for.

string CONSUMER_KEY = "mydomain.com";
string CONSUMER_SECRET = "my_consumer_secret";
string TARGET_USER = "test_user";

OAuth2LeggedAuthenticator auth = new OAuth2LeggedAuthenticator("GmailFeedReader", CONSUMER_KEY, CONSUMER_SECRET, TARGET_USER, CONSUMER_KEY);
HttpWebRequest request = auth.CreateHttpWebRequest("GET", new Uri("https://mail.google.com/mail/feed/atom/"));
HttpWebResponse response = request.GetResponse() as HttpWebResponse;

The response is going to be a standard Atom 0.3 feed, i.e. an XML document that we can load into an XDocument using the standard XmlReader class:

XmlReader reader = XmlReader.Create(response.GetResponseStream());
XDocument xdoc = XDocument.Load(reader);
XNamespace xmlns = "http://purl.org/atom/ns#";

All the parsing can be done with a single LINQ to XML instruction, which iterates the entries and instantiates a new MailMessage object for each email, setting its Subject, Body and From properties with the corresponding values in the feed:

var messages = from entry in xdoc.Descendants(xmlns + "entry")
               from author in entry.Descendants(xmlns + "author")
               select new MailMessage() {
                   Subject = entry.Element(xmlns + "title").Value,
                   Body = entry.Element(xmlns + "summary").Value,
                   From = new MailAddress(
                       author.Element(xmlns + "email").Value,
                       author.Element(xmlns + "name").Value)
               };

At this point, messages will contain a collection of MailMessage instances that we can process or simply dump to the console as in the following snippet:

Console.WriteLine("Number of messages: " + messages.Count());
foreach (MailMessage entry in messages) {
    Console.WriteLine();
    Console.WriteLine("Subject: " + entry.Subject);
    Console.WriteLine("Summary: " + entry.Body);
    Console.WriteLine("Author: " + entry.From);
}

If you have any questions about how to use the Google Data APIs .NET Client Library to access the Gmail Inbox Feed, please post them in the client library discussion group.

Claudio Cherubino   profile | twitter | blog

Claudio is a Developer Programs Engineer working on Google Apps APIs and the Google Apps Marketplace. Prior to Google, he worked as software developer, technology evangelist, community manager, consultant, technical translator and has contributed to many open-source projects, including MySQL, PHP, Wordpress, Songbird and Project Voldemort.

ACL (Access Control List) entries control who can access Google Docs resources. This allows more specific control over resource privacy or permissions.

Many types of applications need to grant document access for several users at once. As an example: when a new user is added to a project in the Manymoon project management application, every user on the project needs to be granted access to all attached Google docs. If there are 10 users on the project and 10 shared documents, this means the app would typically need to perform 100 HTTP requests -- a lot of overhead. With batching of ACL requests, the application can reduce the number of requests to one per document, resulting in a 10x savings.

Before Batching

A typical ACL entry for a single user is created by making an HTTP POST to the ACL link provided with each resource entry. The POST body looks something like this:

<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:gAcl='http://schemas.google.com/acl/2007'>
    <category scheme='http://schemas.google.com/g/2005#kind'
              term='http://schemas.google.com/acl/2007#accessRule'/>
    <gAcl:role value='writer'/>
    <gAcl:scope type='user' value='new_writer@example.com'/>
</entry>

To achieve the same thing using the Python client library, use the following code:

from gdata.acl.data import AclScope, AclRole
from gdata.docs.data import AclEntry

acl = AclEntry(
    scope = AclScope(value='user@example.com', type='user'),
    role = AclRole(value='writer')
)

With Batching

Instead of submitting the requests separately, multiple ACL operations for a resource can be combined into a single batch request. This is done by POSTing a feed of ACL entries. Each ACL entry in the feed must have a special batch:operation element, describing the type of operation to perform on the ACL entry. Valid operations are query, insert, update, and delete.

<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:gAcl='http://schemas.google.com/acl/2007'
      xmlns:batch='http://schemas.google.com/gdata/batch'>
  <category scheme='http://schemas.google.com/g/2005#kind'
            term='http://schemas.google.com/acl/2007#accessRule'/>
    <entry>
        <category scheme='http://schemas.google.com/g/2005#kind'
                  term='http://schemas.google.com/acl/2007#accessRule'/>
        <gAcl:role value='reader'/>
        <gAcl:scope type='domain' value='example.com'/>
        <batch:operation type='insert'/>
    </entry>
    <entry>
        <category scheme='http://schemas.google.com/g/2005#kind'
                  term='http://schemas.google.com/acl/2007#accessRule'/>         
        <id>https://docs.google.com/feeds/default/private/full/document%3Adocument_id/acl/user%3Aold_writer%40example.com</id>
        <gAcl:role value='writer'/>
        <gAcl:scope type='user' value='new_writer@example.com'/>
        <batch:operation type='update'/>
    </entry>
</feed>

The following code represents the same operation in the Python client library:

from gdata.data import BatchOperation
from gdata.acl.data import AclScope, AclRole
from gdata.docs.data import AclEntry

acl1 = AclEntry(
    scope=AclScope(value='example.com', type='domain'),
    role=AclRole(value='reader'),
    batch_operation=BatchOperation(type='insert')
)

acl2 = client.get_acl_entry_by_self_link(
    ('https://docs.google.com/feeds/default/private/full/'
     'document%3Adocument_id/acl/user%3Aold_writer%40example.com'))
acl2.scope = AclScope(value='new_writer@example.com', type='user')
acl2.role = AclRole(value='writer')
acl2.batch_operation = BatchOperation(type='update')

entries = [acl1, acl2]

The feed of these entries can now be submitted together to apply to a resource:

results = client.batch_process_acl_entries(resource, entries)

The return value is an AclFeed, with a list of AclEntry elements for each operation, the status of which can be checked individually:

for result in results.entry:
    print result.title.text, result.batch_status.code

The examples shown here use the raw protocol or the Python client library. The Java client library also supports batch operations on ACL entries.

For more information on how to use batch operations when managing ACLs, see the Google Documents List API documentation, and the Google Data APIs batch protocol reference guide. You can also find assistance in the Google Documents List API forum.


Ali Afshar profile | twitter

Ali is a Developer Programs engineer at Google, working on Google Docs and the Shopping APIs which help shopping-based applications upload and search shopping content. As an eternal open source advocate, he contributes to a number of open source applications, and is the author of the PIDA Python IDE. Once an intensive care physician, he has a special interest in all aspects of technology for healthcare.

As with any Google service, we’re always working to refine the Google Apps Marketplace for our vendors and their customers. Normally we make small, incremental improvements and let the better experience speak for itself, but this week we think several of the new features are noteworthy enough to point out.

View Enhanced Analytics

A frequently requested feature has been improved analytics. If you have Google Analytics configured for your listing, your analytics profile will now receive the search terms and category selected by your customers.

When a customer searches on the Marketplace, the search results will all have two parameters attached: query and category. You’ll want to add these terms to your Google Analytics Website Profile. The query parameter will include all of the search terms the user entered into the search box, or it could be blank if the customer found your application through browsing. Similarly, the category parameter will be blank unless the customer narrowed their search or browsing by picking the category you chose in your application listing, e.g. Accounting & Finance. Now you’ll have much better data about how a customer has reached your application, whether through browsing, searching, or a combination.

Count Installs

Another developer-focused enhancement we’ve made is adding a count of installs and uninstalls on each application’s listing when you are signed in to the vendor account.

"Net install count" represents the number of current installs -- any uninstalls have already been deducted. If you add "Net install count" to "Uninstall count" you will have the total number of installations for the app since it launched. You are also able to retrieve this information from the Licensing API, but it is nice to have it easily accessible on your listing pages.

Sweep Away Those Dusty Apps

On the Vendor Profile page you’ll also see that we’ve added the ability to hide unused applications (and show them again) so that you can manage your applications better and remove clutter on your dashboard. You still cannot delete applications, but you can sweep them under the rug! Note that Hiding and Unhiding are just for managing your list as a Vendor -- they do not change your Publish/Unpublish settings for each app. You can hide a published app as easily as an unpublished app.

Browse Staff Picks on the Home Page

Since announcing the Staff Picks program in May, we’ve featured a number of especially well-integrated and innovative applications on the @GoogleAtWork twitter stream and here on the Google Apps Developer Blog. Now we also feature Staff Picks on the front page of the Marketplace.

You can find four Staff Picks on the Marketplace landing page, chosen from the pool of apps selected as staff picks. See the Staff Picks page on code.google.com for more information on how we choose these apps.

Andy "Rufus" Rothfusz    blog

Rufus is a Developer Programs Engineer working on Google Apps APIs and the Google Apps Marketplace. He has over 14 years of experience in developer programs covering a wide range of applications including 3D graphics acceleration, natural language processing, device security, video games and video streaming.

Editor’s note: This is a guest post by Alex Steshenko. Alex is a core software engineer for Solve360, a highly-rated CRM in the Google Apps Marketplace which was recently chosen as a Staff Pick. Solve360 offers many points of useful integration with Google Apps. Today, in contrast to the conventional Data API integrations, Alex will showcase how he extended Solve360 using Google Apps Script. --- Ryan Boyd

Choosing Google Apps Script

Solve360 CRM integrates with Google services to provide a two-way contact & calendar sync, email sync and a comprehensive Gmail contextual gadget. We use the standard Google Data APIs. However, some of our use cases required us to work with Google documents and spreadsheets. Enter Apps Script! What drew our attention to Google Apps Script was that it allows you to run your application code right within the Google Apps platform itself, where documents can be manipulated using a wide range of native Apps Script functions - a real change of perspective.

Our first experience with Google Apps Script was writing a "Contact Us" form. We decided to use the power and flexibility of Apps Script again to generate different kinds of reports.

Generating Solve360 Reports using Apps Script

Google Spreadsheets can produce rich reports leveraging features such as filters, pivot tables, built-in functions and charts. But where’s the data to report on? Using Google Apps Script, users can integrate Google Spreadsheets with a valuable source of data - the Solve360 CRM - completing the solution.

The Solve360 Google Apps Reporting script lets users configure the reporting criteria while pulling reports into a Google Spreadsheet.

Here's a video demonstrating a real use case for Solve360 Reporting:

Designing Solve360 Reporting using Apps Script

User, meet “Script”

For this script, we realized that simply providing spreadsheet functions would not be good enough. We needed a user interface to let users configure their account details and define what kind of data to fetch from the Solve360 CRM. Google Apps Script’s Ui Services came in handy. For instance, here is the function responsible for showing the “Solve360 account info” dialog:

/*
 * Creates new UI application and opens setting window
 */
function settingsUi() {
  var app = UiApp.createApplication();
  app.setTitle('Solve360 account info')
     .setWidth(260)
     .setHeight(205);

  var absolutePanel = app.createAbsolutePanel();

  absolutePanel.add(authenticationPanel_(app));

  app.add(absolutePanel);
  SpreadsheetApp.getActiveSpreadsheet().show(app);
}
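
The authenticationPanel_() helper is not included in the post. As a rough idea of what it might look like, here is a minimal sketch built with UiApp widgets; the field names, the ‘saveSettings_’ handler name and the overall layout are hypothetical, not Solve360’s actual code:

/*
 * Hypothetical sketch of the panel used by settingsUi().
 * Collects the Solve360 account email and API token.
 */
function authenticationPanel_(app) {
  var panel = app.createVerticalPanel();

  // Text boxes for the account credentials; the names are used to read
  // the values back in the server handler.
  panel.add(app.createLabel('Email'));
  panel.add(app.createTextBox().setName('email'));
  panel.add(app.createLabel('API token'));
  panel.add(app.createTextBox().setName('token'));

  // Server handler invoked when the user clicks "Save".
  var handler = app.createServerHandler('saveSettings_');
  handler.addCallbackElement(panel);
  panel.add(app.createButton('Save').addClickHandler(handler));

  return panel;
}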

Working with Solve360’s API

Solve360 CRM has an external API available so the system can be integrated with custom business applications and processes. The reporting script use case is a good example of what it can be used for.

One of the first tricks we learned was creating our own Google Apps-like “service” to encapsulate all of the functions responsible for interacting with the Solve360 CRM’s API. What is most interesting is that this service’s code isn’t part of the distributed Google Apps Script. Instead, the library is loaded from within the script itself, directly from our servers. Why? Let’s say we found a bug or added new functions - if we had many copies of the service we would need to update them all, somehow notifying our clients, and so on. With one source, there’s no such problem. You may think of it as a way to distribute a Google Apps Script solution, or, in our case, a part of the solution. The service is called Solve360Service and its usage looks like this:

var contact = Solve360Service.addContact(contactData);

There were two problems with getting such an external service to work: Google Apps Script permissions restrictions and actually including it in the script.

The issue with permissions is that the Google Apps Script environment can’t see which Google Apps Script services are used inside the external service - that’s why it doesn’t ask you to grant special permissions for them. To force the authorization request for those permissions, we added this to the onInstall function (called once when the script is added to the spreadsheet):

function onInstall() {
  // to get parseJS permissions
  Xml.parseJS(['solve360', '1']);

  // to get base64Encode permissions
  Utilities.base64Encode('solve360');
  // ...
}

Here is the solution we used to load our centralized code into the script:

eval(UrlFetchApp.fetch("https://secure.solve360.com/gadgets/resources/js/Solve360Service.js").getContentText());

The Solve360Service is loaded from a single source - no copy-paste. All the functions for accessing the Solve360 API, a.k.a. “the plumbing”, are abstracted away and hidden inside this service, while the essentials of the reports script itself can be modified and tweaked for a particular client’s case. Inside Solve360Service we use UrlFetchApp:

/**
 * Request to the Solve360 API server
 * data should be an Array in Short Hand notation
 */
request : function(uri, restVerb, data) {
  if (this._credentials == null) {
    throw new Error('Solve360 credentials are not set');
  }

  if (typeof(data) != 'undefined') {
    if (restVerb.toLowerCase() == 'get') {
      var parameters = [];
      for each(var parameter in data) {
        parameters.push(encodeURIComponent(parameter[0]) + '=' + encodeURIComponent(parameter[1]));
      }
      uri += '?' +  parameters.join('&');
      data = '';
    } else {
      data.unshift('request');
      data = Xml.parseJS(data).toXmlString();
    }
  } else {
    data = '';
  }

  var options = {
    "contentType" : "application/xml",
    "method" : restVerb.toLowerCase(),
    "payload" : data,
    "headers" : {"Authorization" : "Basic " + this._credentials}
  };
  return Xml.parse(UrlFetchApp.fetch(this._url + uri, options).getContentText()).getElement();
}

As the result is always XML, in order to remove any extra work we call Xml.parse() right inside the request function and always return an XmlElement so you can iterate through it and access nodes and attributes. Here is a simplified version of how we load some items when building a report:

/*
* Builds a search config from user preferences and loads a slice of data from Solve360
* To configure how many items should be loaded at a time, change ITEMS_LOAD_REQUEST_LIMIT constant
*/
function retrieveItems_(parameter, offset) {
  initSolve360Service_();
  // ...
  var searchParameters = [
    ['layout', '1'],
    ['sortdir', 'ASC'],
    ['sortfield', 'name'],
    ['start', '' + offset],
    ['limit', '' + ITEMS_LOAD_REQUEST_LIMIT],
    ['filtermode', filtermode],
    ['filtervalue', filtervalue],
    ['searchmode', searchmode],
    ['searchvalue', searchvalue],
    ['special', special],
    ['categories', '1']
  ];
  if (parameter.showAllFieldsCheckbox != 'true' && fields.length > 0) {
    searchParameters.push(['fieldslist', fields.join(',')]);
  }
  // ...
  var items = Solve360Service.searchProjectBlogs(searchParameters);
  // ...
  return items;
}

To simplify the usage of the service, we added another function that initializes the Solve360Service object:

/*
* Loads external Solve360Service
* For the service functions available refer to the source code here:
* https://secure.solve360.com/gadgets/resources/js/Solve360Service.js
*/
var Solve360Service = null;
function initSolve360Service_() {
  if (Solve360Service == null) {
    eval(UrlFetchApp.fetch("https://secure.solve360.com/gadgets/resources/js/Solve360Service.js").getContentText());
    var user = UserProperties.getProperty(USERPROPERTY_USER);
    var token = UserProperties.getProperty(USERPROPERTY_TOKEN);
    if (user == null || user.length == 0 || token == null || token.length == 0) {
      throw new Error('Use Solve360 spreadsheet menu to set email and token first');
    }
    Solve360Service.setCredentials(user, token);
  }
}

As you can see, it uses the email/token pair previously saved in the “Solve360 Account Info” dialog or signals an error if the credentials were not yet saved.
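
The save handler behind that dialog isn’t shown in the post either. A minimal sketch of how the pair might be persisted with the UserProperties service (the handler and field names match the hypothetical panel sketch above) could look like this:

/*
 * Hypothetical handler for the settings dialog's Save button.
 * Stores the credentials that initSolve360Service_() reads back later.
 */
function saveSettings_(e) {
  UserProperties.setProperty(USERPROPERTY_USER, e.parameter.email);
  UserProperties.setProperty(USERPROPERTY_TOKEN, e.parameter.token);

  // Close the dialog once the values are saved.
  var app = UiApp.getActiveApplication();
  return app.close();
}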

Conclusion

There are many use cases where you can apply Google Apps Script. The fact that you can work and implement solutions right from “inside” one of the greatest and most universal web applications available is amazing.

You can integrate your own software with Google Docs or even learn from us and build a reporting script for any other system accessible online. Try to look at solving business tasks from a different perspective, from the Google Apps point of view. We encourage it!

The code of the new script is available for use and study here:
https://secure.solve360.com/docs/google-apps-reports.js.

Alex Steshenko profile | twitter | blog

Alex Steshenko is a core software engineer for Solve360, a CRM application on the Google Apps Marketplace.

We have released version 1.9 of the .NET Library for Google Data APIs and it is available for download.

This version adds the following new features:

  • support for 3-legged OAuth
  • two new sample applications: BookThemAll (mashup of Calendar and Provisioning APIs) and UnshareProfiles (showcasing a new feature of the Google Apps Profiles API)
  • updates to the Content for Shopping API to implement 20+ new item attributes
  • support for the new yt:rating system and Access Control settings in the YouTube API

This new version also removes the client library for the deprecated Google Base API and fixes 20 bugs.

For more details, please check the Release Notes and remember to file feature requests or bugs in the project issue tracker.

Claudio Cherubino profile | twitter | blog

Claudio is a Developer Programs Engineer working on Google Apps APIs and the Google Apps Marketplace. Prior to Google, he worked as software developer, technology evangelist, community manager, consultant, technical translator and has contributed to many open-source projects, including MySQL, PHP, Wordpress, Songbird and Project Voldemort.

Charts are a great way to communicate significant amounts of data. We’ve joined forces with the Google Chart Tools team in order to bring a new Charts API to Apps Script. Every day, millions of charts are created, updated, put into presentations, emailed to managers, and published as web pages. Our goal is to automate chart creation in Google Apps and make the sometimes-tedious tasks of chart creation and updating a little more fun!

Charts Ahoy!

Our initial launch includes six chart types. These can be attached to sites, sent as email attachments, or displayed using an Apps Script UiApp.


4 Easy Steps to Create Charts

Step 1 - Open Apps Script Editor

You can access the Apps Script Editor from a spreadsheet or a Google Sites page. To access the Apps Script Editor from a spreadsheet, open a new spreadsheet and go to Tools > Script Editor. To open the Script Editor from a Google Sites page, go to More > Manage Site > Apps Scripts > Add New Script.

Step 2 - Create a Data Table

To build a chart, the first thing you need is a data table. The data table contains the data for the chart as well as labels for each category. In general, data tables contain one column of labels followed by one or more columns of data series, with some variations. Read the documentation for the type of chart you’re creating to learn the exact data table format it expects. Here’s an example of how you’d create a data table in Apps Script to use with a column chart:

function doGet() {
  // Populate the DataTable. We'll have the data labels in
  // the first column, "Quarter", and then add two data columns,
  // for "Income" and "Expenses".
  var dataTable = Charts.newDataTable()
      .addColumn(Charts.ColumnType.STRING, "Quarter")
      .addColumn(Charts.ColumnType.NUMBER, "Income")
      .addColumn(Charts.ColumnType.NUMBER, "Expenses")
      .addRow(["Q1", 50, 60])
      .addRow(["Q2", 60, 55])
      .addRow(["Q3", 70, 60])
      .addRow(["Q4", 100, 50])
      .build();

In the example above, we’ve hard-coded the data. You can also populate the table in any of these ways:
  • Fetch the data from an existing spreadsheet using SpreadsheetApp (see the sketch after this list)
  • With data from a UiApp form
  • Using our JDBC API
  • Using UrlFetch
  • Or any other way in which you can get an array of data using Apps Script.
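
As an example of the first option, here is a minimal sketch that builds the same data table from a spreadsheet instead of hard-coded values. It assumes a sheet named "Quarters" whose first row is a header and whose data rows hold a quarter label, an income figure and an expenses figure; the sheet name and layout are just assumptions for illustration.

// Sketch: build the DataTable from spreadsheet rows instead of literals.
// Assumes a sheet named "Quarters" with a header row followed by rows of
// [quarter, income, expenses].
function buildDataTableFromSheet() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName("Quarters");
  var values = sheet.getDataRange().getValues();

  var builder = Charts.newDataTable()
      .addColumn(Charts.ColumnType.STRING, "Quarter")
      .addColumn(Charts.ColumnType.NUMBER, "Income")
      .addColumn(Charts.ColumnType.NUMBER, "Expenses");

  // Skip the header row and copy each data row into the table.
  for (var i = 1; i < values.length; i++) {
    builder.addRow([values[i][0], values[i][1], values[i][2]]);
  }
  return builder.build();
}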

Step 3 - Build a Chart using Data Table

Once you have the data table ready, you can start building the chart. Our top-level Charts class has Builders for each of the chart types we support. Each builder is configured for the specific chart you’re building, exposing only the methods available for that chart type. For example, in a Line Chart you can make the angles smooth, in Bar and Column Charts you can stack up the data, and in Pie Charts you can make the whole chart 3D! Here’s an example of using the above data table to build a Column Chart:

// Build the chart.  We'll make income green and expenses red  
// for good presentation.  
var chart = Charts.newColumnChart()      
    .setDataTable(dataTable)      
    .setColors(["green", "red"])      
    .setDimensions(600, 400)      
    .setXAxisTitle("Quarters")      
    .setYAxisTitle("$")      
    .setTitle("Income and Expenses per Quarter")      
    .build();

In the above chart, the only required methods are setDataTable() and build(); all of the others are optional. If you don’t set colors and dimensions, for instance, we’ll pick some default values for you. Use the different setter methods to customize your chart, however and whenever you feel like it.
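
For instance, here is a minimal sketch of one such customization: reusing the data table built above and stacking the two series with setStacked() (colors and axis titles are left at their defaults):

// Build a stacked variant of the same chart; income and expenses
// are piled into a single column for each quarter.
var stackedChart = Charts.newColumnChart()
    .setDataTable(dataTable)
    .setStacked()
    .setDimensions(600, 400)
    .setTitle("Income and Expenses per Quarter (stacked)")
    .build();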

Step 4 - Publish your chart in Documents, Email, Sites or UiApp

Once you’ve built your chart, there are different things you can do with it. For example, you can add it to an Apps Script UI. You can add a chart to any part of the UI that takes a widget, including the application itself. The following code snippet shows you how to publish a chart with UiApp.

// Add our chart to the UI and return it so that we can publish
// this UI as a service and access it via a URL.
var ui = UiApp.createApplication();
ui.add(chart);
return ui;
}

Charts can also be used as Blobs. This allows Charts to be attached to Sites pages, saved to your Docs List, or attached to outgoing emails. The code below does all three of these things:

// Save the chart to our Document List  
var file = DocsList.createFile(chart);  
file.rename("Income Chart");  
file.addToFolder(DocsList.getFolder("Charts"));        

// Attach the chart to the active sites page.  
var page = SitesApp.getActivePage();  
page.addHostedAttachment(chart, "Income Chart");   

 // Attach the chart to an email.  
MailApp.sendEmail(      
    "recipient@example.com",       
    "Income Chart",  // Subject      
    "Here's the latest income chart", // Content      
    {attachments: chart });

And that’s it. We hope you enjoy the new API. If your favorite chart is not here yet, or if you have ideas on how we could improve the API, please let us know in our forum. Finally, enjoy the income chart we’ve been building.


Gustavo Moura

Gustavo has been a Software Engineer at Google since 2007. He has been part of the Google Docs team since 2009. Prior to that, he worked on AdWords.



In March, we announced that we would start requiring clients to use SSL when making requests to the Google Documents List API, the Google Spreadsheets API, and the Google Sites API. This is part of our ongoing effort to increase the security of user data.

The time has come, and we are starting to roll out this requirement. On average, about 86% of requests to these APIs are already using SSL, so we expect there to be minimal migration required. The implementation will continue throughout September. If an application receives an HTTP 400 Bad Request response to a request, then it may be because the request was not made using HTTPS.

Clients that have not already started using SSL for all requests should do so immediately. This is as simple as upgrading to the latest version of the relevant API client library. Developers with questions should post in the API forums.



Ali Afshar profile | twitter

Ali is a Developer Programs engineer at Google, working on Google Docs and the Shopping APIs which help shopping-based applications upload and search shopping content. As an eternal open source advocate, he contributes to a number of open source applications, and is the author of the PIDA Python IDE. Once an intensive care physician, he has a special interest in all aspects of technology for healthcare.