We extended the Gmail Add-on preview toward the end of last year, so that developers can bring the functionality of business apps they rely on directly into Gmail. Now, we’re also making it easier for you to develop and publish Gmail Add-ons domain-wide. Create add-ons for users to access tools directly in their inbox, like your company directory, HR tools or other CRM solutions.

You can now:
  • Publish Gmail Add-ons to users in your G Suite domain. This lets you build and deploy custom add-ons for workflows or processes that are unique to your company.
  • Install Gmail Add-ons for G Suite accounts before they’re published. This way, you can test your add-on before releasing widely in the workplace.
  • Plus, G Suite admins can install add-ons for their domains. This lets admins deploy any add-on you build domain-wide, so more people have the chance to try it.
Check out this video to see what it’s like to build an add-on, then start building one yourself with this hands-on codelab. You can also view already-built add-ons in the G Suite Marketplace. Or, if you have another project in mind, submit your add-on idea for consideration.

We’re on a mission to make work easier and a big part of that means giving you the tools you need to speed up workflows. Build on!



Apps Script has come a long way since we first launched scripting with Google Sheets. Now, Apps Script supports more than 5 million weekly active scripts that are integrated with a host of G Suite apps, and more than 1 billion daily executions.

As developers increasingly rely on Apps Script for mission-critical enterprise applications, we've redoubled our efforts to improve its power, reliability and operational monitoring, like our recently announced integration with Stackdriver for logging and error reporting. Today, we’re providing three new tools to help further improve your workflows and manage Apps Script projects:
  1. Apps Script dashboard, to help you manage, debug and monitor all of your projects in one place. 
  2. Apps Script API, so you can programmatically manage Apps Script source files, versions and deployments. 
  3. Apps Script Command Line Interface, for easy access to Apps Script API functionality from your terminal and shell scripts. 

Apps Script dashboard 

Over the next few weeks we’ll be making a new dashboard available to help you manage, debug and monitor all of your Apps Script projects from one place.
In this new dashboard—available at script.google.com—you will be able to:
  • View and search all of your projects. 
  • Monitor the health and usage of projects you care about. 
  • View details about individual projects. 
  • View a log of project executions and terminate long-running executions. 
Check out the documentation for more detail on the dashboard. If you encounter any issues, please use the feedback link in the left column of the new dashboard or file a bug.

Apps Script API 

The new Apps Script dashboard is built on top of a powerful new Apps Script API which replaces and extends the Execution API. This new Apps Script API provides a RESTful interface for developers to create, manage, deploy and execute their scripts from their preferred programming language. This gives you control to create development workflows and deployment strategies that fit your needs. With this new API, you can:
  • Create, read, update and delete script projects, source files and versions. 
  • Manage project deployments and entry points (web app, add-on, execution). 
  • Obtain project execution metrics and process data. 
  • Run script functions. 
To learn more about the new Apps Script API, check out the documentation. If you encounter any issues please ask a question on Stack Overflow or file a bug.
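To give a feel for the management surface, here is a minimal Python sketch that builds the request body for the API's projects.updateContent method, the call used to push source files into a project. The file names and sources below are hypothetical examples; the "type" values (JSON, SERVER_JS, HTML) come from the API's File resource.

```python
# Sketch: build the body for an Apps Script API projects.updateContent
# call from a mapping of file name -> (type, source). File names carry
# no extension; the type field distinguishes manifest, script, and HTML.

def build_update_content_body(files):
    """files: dict mapping file name -> (type, source)."""
    return {
        "files": [
            {"name": name, "type": ftype, "source": source}
            for name, (ftype, source) in files.items()
        ]
    }

body = build_update_content_body({
    "appsscript": ("JSON", '{"timeZone": "America/New_York"}'),
    "Code": ("SERVER_JS", 'function hello() { return "Hello"; }'),
})

# With an authorized service object from the Google API Python client,
# the actual call would look like:
#   service.projects().updateContent(scriptId=SCRIPT_ID, body=body).execute()
```

The same body shape is what a tool built on this API would assemble from files on disk before deploying a new version.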

Apps Script Command Line Interface 

Lastly, we’re pleased to introduce the first open-source client of the Apps Script API, a command-line interface tool called clasp (Command Line Apps Script Projects). clasp allows you to access the management functionality of the Apps Script API with intuitive terminal commands and is available as an open-source project on GitHub.
clasp allows developers to create, pull and push Apps Script projects, plus manage deployments and versions with terminal commands and shell scripts. clasp also allows you to write and maintain your Apps Script projects using the development tools of your choice including your native IDE, Git and GitHub.

To get started, try the clasp codelab. You can file issues or ask questions on the clasp project GitHub page.

We’re doubling down on powerful platforms like Apps Script. We hope these new additions help ease your development process.


Google Apps Script has always provided a simple logging tool—the Logger service—to help developers better test their scripts. This works for many simple use cases, but developers need other ways to log messages and errors, particularly when:
  • Troubleshooting or analyzing scripts across multiple executions
  • Working on a script or add-on with multiple users 
  • Looking for trends or insights about their scripts and users
To make Apps Script a friendlier environment for developers, we are announcing general availability of a new integration with Google Stackdriver. This is in addition to the pre-existing Logger service, which is still available.

Using Stackdriver Logging in Google Apps Script

Log messages can now be sent to Stackdriver Logging using the familiar console.log(), console.info(), etc. functions. You can also instruct Apps Script to log messages and stack traces for all exceptions simply by checking a box; these then become available for analysis in Stackdriver Error Reporting. No need to add a single extra line of code.

In Stackdriver, logs are kept for 7 days for free, and the premium tier offers 30-day retention. Powerful search and filtering are available to quickly find log entries by text content or metadata, and developers can also choose to export logs to BigQuery, Cloud Storage, and Cloud Pub/Sub for further analysis, long-term storage, and custom workflows.

Log messages and errors are reported for all users of a script, with a unique but obfuscated identifier assigned to each user. This means log entries can be aggregated anonymously per user, for example allowing developers to count unique users impacted by an issue or analyze user behavior, but without logging users’ personally identifying information.


Developers get some of these aggregated analyses for free. In the Stackdriver Error Reporting tab of the developer console, you can see recurring errors and the numbers of users impacted. You can even subscribe to receive an email alert when a new type of error is detected.


How developers are using Stackdriver Logging

Developers of scripts and add-ons have started to rely more and more on this new logging capability. Romain Vialard, creator of Yet Another Mail Merge, a popular Google Sheets add-on, is using Stackdriver Logging to time the execution of his add-on, exporting data to BigQuery to perform aggregations and analyze trends. Read this tutorial to learn how to export logs to BigQuery and run queries to analyze how users are interacting with your script.

Stackdriver Logging is one of the ways we’re making Apps Script a more manageable platform for developers. We hope that it and other features coming soon make Apps Script developers more productive and their scripts, add-ons and integrations more robust.

You can read more about how to enable and use the Stackdriver integration by reading Apps Script’s logging documentation.

About the authors 

Romain Vialard is a Google Developer Expert. After some years spent as a consultant, he is now focused on products for G Suite (formerly Google Apps) users, including add-ons such as Yet Another Mail Merge and Form Publisher.

Paul McReynolds is a Product Manager at Google focused on Apps Script and G Suite Marketplace. Previously a startup founder and CTO, Paul believes that the easy things need to be easy or the hard things don’t get done. At Google, he's excited to be a part of the company that makes solving problems for business fun again.

Editor's note: Yet Another Mail Merge and Form Publisher are not created, sponsored, or supported by Google.

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite

The G Suite team recently launched the very first Google Slides API, opening up a whole new set of possibilities, including leveraging data already sitting in a spreadsheet or database, and programmatically generating slide decks or slide content based on that data. Why is this a big deal? One of the key advantages of slide decks is that they can take database or spreadsheet data and make it more presentable for human consumption. This is useful when the need arises to communicate the information reflected by that data to management or potential customers.

Walking developers through a short application demonstrating both the Sheets and Slides APIs to make this happen is the topic of today's DevByte video. The sample app starts by reading all the necessary data from the spreadsheet using the Sheets API. The Slides API takes over from there, creating new slides for the data, then populating those slides with the Sheets data.

Developers interact with Slides by sending API requests. Similar to the Google Sheets API, these requests come in the form of JSON payloads. You create an array like the one in the JavaScript pseudocode below, featuring requests to create a cell table on a slide and import a chart from a Sheet:


var requests = [
   {"createTable": {
       "elementProperties":
           {"pageObjectId": slideID},
       "rows": 8,
       "columns": 4
   }},
   {"createSheetsChart": {
       "spreadsheetId": sheetID,
       "chartId": chartID,
       "linkingMode": "LINKED",
       "elementProperties": {
           "pageObjectId": slideID,
           "size": {
               "height": { ... },
               "width": { ... }
           },
           "transform": { ... }
       }
   }}
];
Once you've got at least one request, say in a variable named requests (as above), including the Sheet's sheetID and chartID plus the presentation page's slideID, you'd pass it to the API with just one call to the presentations().batchUpdate() command, which in Python looks like the below if SLIDES is your API service endpoint:
SLIDES.presentations().batchUpdate(presentationId=slideID,
       body={'requests': requests}).execute()

Creating tables is fairly straightforward. Creating charts has some magical features, one of those being the linkingMode. A value of "LINKED" means that if the Sheet data changes (altering the chart in the Sheet), the same chart in a slide presentation can be refreshed to match the latest image, either by the API or in the Slides user interface! You can also request a plain old static image that doesn't change with the data by selecting a value of "NOT_LINKED_IMAGE" for linkingMode. More on this can be found in the documentation on creating charts, and check out the video where you'll see both those API requests in action.
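As a sketch of the API side of that refresh: re-syncing a linked chart later is itself just another batchUpdate request, using the refreshSheetsChart request type. The object ID below is a hypothetical value of the kind returned when the chart was created.

```python
# Sketch: refresh a LINKED Sheets chart previously embedded in a slide.
# chart_object_id is the page element ID assigned when the chart was
# created via createSheetsChart (hypothetical value here).
chart_object_id = "chart_0"

refresh_request = {"refreshSheetsChart": {"objectId": chart_object_id}}

# With a Slides API service object, you would then send:
#   SLIDES.presentations().batchUpdate(
#       presentationId=slideID,
#       body={"requests": [refresh_request]}).execute()
```

A "NOT_LINKED_IMAGE" chart has no such refresh path, which is the trade-off to weigh when choosing the linkingMode.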

For a detailed look at the complete code sample featured in the video, check out the deep dive post. We look forward to seeing the interesting integrations you build with the power of both APIs!

Posted by Wesley Chun (@wescpy), Developer Advocate, G Suite
At Google I/O earlier this year, we launched a new Google Sheets API (click here to watch the entire announcement). The updated API includes many new features that weren't available in previous versions, including access to more functionality found in the Sheets desktop and mobile user interfaces. Formatting cells in Sheets is one example of something that wasn't possible with previous versions of the API and is the subject of today's DevByte video.
In our previous Sheets API video, we demonstrated how to get data into and out of a Google Sheet programmatically, walking through a simple script that reads rows out of a relational database and transfers the data to a new Google Sheet. The Sheet created using the code from that video is where we pick up today.

Formatting spreadsheets is accomplished by creating a set of request commands in the form of JSON payloads, and sending them to the API. Here is a sample JavaScript Object made up of an array of requests (only one this time) to bold the first row of the default Sheet automatically created for you (whose ID is 0):

{"requests": [
    {"repeatCell": {
        "range": {
            "sheetId": 0,
            "startRowIndex": 0,
            "endRowIndex": 1
        },
        "cell": {
            "userEnteredFormat": {
                "textFormat": {
                    "bold": true
                }
            }
        },
        "fields": "userEnteredFormat.textFormat.bold"
    }}
]}
With the payload above in a variable named requests and the ID of the sheet as SHEET_ID, you send it to the API via an HTTP POST to https://sheets.googleapis.com/v4/spreadsheets/{SHEET_ID}:batchUpdate, which in Python would be a single call that looks like this:
SHEETS.spreadsheets().batchUpdate(spreadsheetId=SHEET_ID,
        body=requests).execute()

For more details on the code in the video, check out the deep dive blog post. As you can probably guess, the key challenge is in constructing the JSON payload to send in API calls—the common operations samples can really help you with this. You can also check out our JavaScript codelab where we guide you through writing a Node.js app that manages customer orders for a toy company, featuring the toy orders data we looked at today but in a relational database. While the resulting equivalent Sheet is featured prominently in today's video, we will revisit it again in an upcoming episode showing you how to generate slides with spreadsheet data using the new Google Slides API, so stay tuned for that!

We hope all these resources help developers enhance their next app using G Suite APIs! Please subscribe to our channel and tell us what topics you would like to see in other episodes of the G Suite Dev Show!



We know many of you consider your mobile device as your primary tool to consume business information, but what if you could use it to get more work done, from anywhere? We’re excited to introduce Android add-ons for Docs and Sheets, a new way for you to do just that—whether it’s readying a contract you have for e-signature from your phone, or pulling in CRM data on your tablet for some quick analysis while waiting for your morning coffee, Android add-ons can help you accomplish more.

Get more done with your favorite third-party apps, no matter where you are

We’ve worked with eight integration partners who have created seamless integrations for Docs and Sheets. Here’s a preview of just a few of them:
  • DocuSign - Trigger or complete a signing process from Docs or Sheets, and save the executed document to Drive. Read more here.
DocuSign lets you easily create signature envelopes right from Google Docs
  • ProsperWorks - Import your CRM data to create and update advanced dashboards, reports and graphs on Sheets, right from your device. Read more here.
  • AppSheet - Create powerful mobile apps directly from your data in Sheets instantly — no coding required. Read more here.
  • Scanbot - Scan your business documents using built-in OCR, and insert their contents into Docs as editable text. Read more here.


You can find these add-ons and many more, including PandaDoc, ZohoCRM, Teacher Aide, EasyBib and Classroom in our Google Play collection as well as directly from the add-on menus in Docs or Sheets.


Try them out today, and see how much more you can do.

Calling all developers: try our developer preview today!

As you can see from above, Android add-ons offer a great opportunity to build innovative integrations and reach Docs and Sheets users around the world. They’re basically Android apps that connect with Google Apps Script projects on the server-side, allowing them to access and manipulate data from Google Docs or Sheets using standard Apps Script techniques. Check out our documentation which includes UI guidelines as well as sample code to get you started. We’ve also made it easy for you to publish your apps with the Apps Script editor.

Android add-ons are available today as a developer preview. We look forward to seeing what you build!

Posted by Wesley Chun (@wescpy), Developer Advocate, Google Apps

At Google I/O 2016, we launched a new Google Sheets API—click here to watch the entire announcement. The updated API includes many new features that weren’t available in previous versions, including access to functionality found in the Sheets desktop and mobile user interfaces. My latest DevByte video shows developers how to get data into and out of a Google Sheet programmatically, walking through a simple script that reads rows out of a relational database and transfers the data to a brand new Google Sheet.

Let’s take a sneak peek of the code covered in the video. Assuming that SHEETS has been established as the API service endpoint, SHEET_ID is the ID of the Sheet to write to, and data is an array with all the database rows, this is the only call developers need to make to write that raw data into the Sheet:


SHEETS.spreadsheets().values().update(spreadsheetId=SHEET_ID,
    range='A1', body={'values': data}, valueInputOption='RAW').execute()
Reading rows out of a Sheet is even easier. With SHEETS and SHEET_ID again, this is all you need to read and display those rows:
rows = SHEETS.spreadsheets().values().get(spreadsheetId=SHEET_ID,
    range='Sheet1').execute().get('values', [])
for row in rows:
    print(row)

If you’re ready to get started, take a look at the Python quickstart, or quickstarts in a variety of other languages, before checking out the DevByte. If you want a deeper dive into the code covered in the video, check out the post at my Python blog. Once you get going with the API, one of the challenges developers face is constructing the JSON payload to send in API calls—the common operations samples can really help you with this. Finally, if you’re ready for a meatier example, check out our JavaScript codelab, where you’ll write a sample Node.js app that manages customer orders for a toy company. Its database is the same one used in this DevByte, so the video is good preparation for the codelab.

We hope all these resources help developers create amazing applications and awesome tools with the new Google Sheets API! Please subscribe to our channel, give us your feedback below, and tell us what topics you would like to see in future episodes!

Ever look at the data returned when using the Drive API? A files.list call, even if just returning a single file, can yield upwards of 4kb of data. Drive has a rich set of metadata about files, but chances are your application only needs a small fraction of what’s available.

One of the simplest but most effective optimizations you can make when building apps with the Drive API is limiting the amount of data returned to only those fields needed for your particular use case. The fields query parameter gives you that control, and the results can be dramatic.

A simple example of this is using the files.list call to display a list of files to a user. The naive query, https://www.googleapis.com/drive/v2/files?maxResults=100, generated more than 380KB of data when I ran it against my own corpus. But to render this list nicely, an app only needs a few bits of information -- the document title, icon & thumbnail URLs, the MIME type, and of course the file ID.

Using the fields query parameter, the results can be trimmed to just the necessary fields and those needed for fetching subsequent pages of data. The optimized query is https://www.googleapis.com/drive/v2/files?maxResults=100&fields=items(iconLink%2Cid%2Ckind%2CmimeType%2CthumbnailLink%2Ctitle)%2CnextPageToken.

After modifying the query, the resulting data was only 30KB. That’s more than a 90% reduction in data size! Besides reducing the amount of data on the wire, these hints also enable us to further optimize how queries are processed. Not only is there less data to send, but also less time is spent getting it in the first place.
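If you'd rather not hand-encode that query string, here is a minimal Python sketch that builds an equivalent, fully-encoded form of the optimized URL with the standard library (the field list mirrors the example above):

```python
from urllib.parse import urlencode

# Sketch: build the trimmed-down files.list URL. The fields expression
# keeps only what a file-listing UI needs, plus nextPageToken for paging.
FIELDS = "items(iconLink,id,kind,mimeType,thumbnailLink,title),nextPageToken"

params = {"maxResults": 100, "fields": FIELDS}
url = "https://www.googleapis.com/drive/v2/files?" + urlencode(params)

print(url)
```

Letting urlencode handle the percent-escaping also makes it easy to experiment with different field selections without re-encoding by hand.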



Steven Bazyl   profile | twitter

Steve is a Developer Advocate for Google Drive and enjoys helping developers build better apps.

Our newest set of APIs (Tasks, Calendar v3, and Google+, to name a few) is supported by the Google APIs Discovery Service. The Google APIs Discovery Service offers an interface that allows developers to programmatically get API metadata such as:

  • A directory of supported APIs.
  • A list of API resource schemas based on JSON Schema.
  • A list of API methods and parameters for each method and their inline documentation.
  • A list of available OAuth 2.0 scopes.

The APIs Discovery Service is especially useful when building developer tools, as you can use it to automatically generate certain features. For instance, we use the APIs Discovery Service in our client libraries and in our APIs Explorer, and also to generate some of our online API reference.

Because the APIs Discovery Service is itself an API, you can use features such as partial response, which is a way to get only the information you need. Let’s look at some of the useful information that is available using the APIs Discovery Service and the partial response feature.

List the supported APIs

You can get the list of all the APIs that are supported by the discovery service by sending a GET request to the following endpoint:

https://www.googleapis.com/discovery/v1/apis?fields=items(title,discoveryLink)

Which will return a JSON feed that looks like this:

{
    "items": [
        …
        {
            "title": "Google+ API",
            "discoveryLink": "./apis/plus/v1/rest"
        },
        {
            "title": "Tasks API",
            "discoveryLink": "./apis/tasks/v1/rest"
        },
        {
            "title": "Calendar API",
            "discoveryLink": "./apis/calendar/v3/rest"
        },
        …
    ]
}

Using the discoveryLink attribute in the resources part of the feed above you can access the discovery document of each API. This is where a lot of useful information about the API can be accessed.
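Since discoveryLink is a relative reference, you can resolve it against the directory endpoint with a standard URL join. A minimal Python sketch:

```python
from urllib.parse import urljoin

# Sketch: resolve a discoveryLink from the directory feed against the
# directory endpoint to get the full discovery document URL.
DIRECTORY = "https://www.googleapis.com/discovery/v1/apis"

doc_url = urljoin(DIRECTORY, "./apis/tasks/v1/rest")
print(doc_url)  # https://www.googleapis.com/discovery/v1/apis/tasks/v1/rest
```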

Get the OAuth 2.0 scopes of an API

Using the API-specific endpoint you can easily get the OAuth 2.0 scopes available for that API. For example, here is how to get the scopes of the Google Tasks API:

https://www.googleapis.com/discovery/v1/apis/tasks/v1/rest?fields=auth(oauth2(scopes))

This method returns the JSON output shown below, which indicates that https://www.googleapis.com/auth/tasks and https://www.googleapis.com/auth/tasks.readonly are the two scopes associated with the Tasks API.

{
    "auth": {
        "oauth2": {
            "scopes": {
                "https://www.googleapis.com/auth/tasks": {
                    "description": "Manage your tasks"
                },
                "https://www.googleapis.com/auth/tasks.readonly": {
                    "description": "View your tasks"
                }
            }
        }
    }
}

Using requests of this type, you can detect which APIs do not support OAuth 2.0. For example, the Translate API does not support OAuth 2.0, as it does not provide access to OAuth-protected resources such as user data. Because of this, a GET request to the following endpoint:

https://www.googleapis.com/discovery/v1/apis/translate/v2/rest?fields=auth(oauth2(scopes))

Returns:

{}

Getting scopes required for an API’s endpoints and methods

Using the API-specific endpoints again, you can get the lists of operations and API endpoints, along with the scopes required to perform those operations. Here is an example querying that information for the Google Tasks API:

https://www.googleapis.com/discovery/v1/apis/tasks/v1/rest?fields=resources/*/methods(*(path,scopes,httpMethod))

Which returns:

{
    "resources": {
        "tasklists": {
            "methods": {
                "get": {
                    "path": "users/@me/lists/{tasklist}",                         
                    "httpMethod": "GET",
                    "scopes": [
                        "https://www.googleapis.com/auth/tasks",
                        "https://www.googleapis.com/auth/tasks.readonly"
                    ]
                },
                "insert": {
                    "path": "users/@me/lists",
                    "httpMethod": "POST",
                    "scopes": [
                        "https://www.googleapis.com/auth/tasks"
                    ]
                },
                …
            }
        },
        "tasks": {
            …
        }
    }
}

This tells you that to perform a POST request to the users/@me/lists endpoint (to insert a new task list), you need to have been authorized with the scope https://www.googleapis.com/auth/tasks, and that to do a GET request to the users/@me/lists/{tasklist} endpoint, you need to have been authorized with either of the two Google Tasks scopes.

You could use this to automatically discover the scopes you need to request in order to perform all the operations that your application does.

You could also use this information to detect which operations and endpoints you can access given a specific authorization token (OAuth 2.0, OAuth 1.0, or AuthSub token). First, use either the AuthSub Token Info service or the OAuth 2.0 Token Info service to determine which scopes your token has access to (see below); then deduce from the feed above which endpoints and operations require access to those scopes.

                        
[Access Token] -----(Token Info)----> [Scopes] -----(APIs Discovery)----> [Operations/API Endpoints]

Example of using the OAuth 2.0 Token Info service:

Request:

GET /oauth2/v1/tokeninfo?access_token= HTTP/1.1
Host: www.googleapis.com

Response:

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
…

{
    "issued_to": "1234567890.apps.googleusercontent.com",
    "audience": "1234567890.apps.googleusercontent.com",
    "scope": "https://www.google.com/m8/feeds/ 
              https://www.google.com/calendar/feeds/",
    "expires_in": 1038
}
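Putting the two feeds together, here is a minimal Python sketch of that deduction step: it takes the scopes reported by the Token Info service and a hand-copied subset of the Tasks API discovery document like the one shown earlier, and lists the methods the token can call. A real implementation would also need to walk nested sub-resources.

```python
# Hand-copied fragment of the Tasks API discovery document (see above).
discovery_fragment = {
    "resources": {
        "tasklists": {
            "methods": {
                "get": {
                    "path": "users/@me/lists/{tasklist}",
                    "httpMethod": "GET",
                    "scopes": [
                        "https://www.googleapis.com/auth/tasks",
                        "https://www.googleapis.com/auth/tasks.readonly",
                    ],
                },
                "insert": {
                    "path": "users/@me/lists",
                    "httpMethod": "POST",
                    "scopes": ["https://www.googleapis.com/auth/tasks"],
                },
            }
        }
    }
}

def allowed_methods(doc, token_scopes):
    """Yield (httpMethod, path) pairs callable with any of token_scopes."""
    for resource in doc.get("resources", {}).values():
        for method in resource.get("methods", {}).values():
            if set(method.get("scopes", [])) & set(token_scopes):
                yield (method["httpMethod"], method["path"])

readonly = ["https://www.googleapis.com/auth/tasks.readonly"]
print(list(allowed_methods(discovery_fragment, readonly)))
# Only the GET method qualifies for a read-only token; insert requires
# the full https://www.googleapis.com/auth/tasks scope.
```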

There is a lot more you can do with the APIs Discovery Service so I invite you to have a deeper look at the documentation to find out more.


Nicolas Garnier profile | twitter | events

Nicolas joined Google’s Developer Relations in 2008. Since then he's worked on commerce oriented products such as Google Checkout and Google Base. Currently, he is working on Google Apps with a focus on the Google Calendar API, the Google Contacts API, and the Tasks API. Before joining Google, Nicolas worked at Airbus and at the French Space Agency where he built web applications for scientific researchers.

Thanks to everyone who participated in the first Marketing Test Kitchen initiative: the “Add to Apps” button. Overall, it was a huge success. The number of vendors using “Add to Apps” buttons grew significantly, causing a large increase in installs driven by button traffic. Before kicking off the second Apps Ecosystem Marketing Test Kitchen initiative, we want to recognize the winners of the first one.

Congratulations to the 6 winners, who will get additional exposure on the featured and notable section of the Marketplace front page:
Outright, Producteev, Insync, Mavenlink, Zoho and Manymoon

Established vendors such as Manymoon and Zoho improved performance of existing buttons and newer folks like Outright and Producteev added buttons to capture new business. If you didn’t get your button up for last week’s contest, that doesn't mean you shouldn’t do it now! Adding a button helps improve your overall performance in the Marketplace and will prepare you for future initiatives.

Now let’s take a look at the next Marketing Test Kitchen...

The Next Challenge:
Publish your most compelling customer success stories by Thursday, Dec 2nd on your own blog and share them with us at marketing-test-kitchen@google.com. We will feature a few of the top stories on the Google Enterprise Blog (see examples here and here) and also rotate the winning vendors into the featured and notable sections on the Marketplace front page. Note that we will feature every submission on the Marketplace Success Stories blog, so just by submitting a story you will gain extra exposure.

It’s easy to participate: Find a compelling customer, tell their story, publish it on your blog, share it with us, and track your performance.

What makes a compelling customer?
It is important to find a customer that demonstrates the value of your integrated features with Google Apps. Make sure that your customer gives explicit approval for using their story. Here are some qualities of a compelling customer.
  • Highlights the value of your app: For example, their use of your app in conjunction with various other web apps, such as other Marketplace apps.
  • Hard data to support success: Numbers that justify strong gains are important, e.g., 50% productivity gains, 10% increase in revenue, 20% reduction in IT costs.
  • Passionate about Google Apps and the cloud: A genuinely passionate customer can explain the advantages of a cloud-based business and more easily help prospects understand and transition.
How can I make it easily consumable?
You can use the standard template from the developer site or find a more creative way to deliver it. You can create your own format that tells the story of the customer’s success. Here are some ideas to go beyond a typical blog post:
  • Be visual: Use tools such as Picnik and Aviary to tell your story with compelling visuals (or choose another creative tool).
  • Organize your presentation: You can use Google Presentations or SlideRocket to succinctly tell your story.
  • Use video: Shoot or animate a video of your customer telling their Apps Marketplace story.
  • Be creative: Combine the above ideas, write a story, or come up with something totally different.
To get a feel for different tones and stories, read some customer stories from various vendors on the Marketplace Success Stories blog. Also check out this example of a strong customer story that uses many of the above elements.



It’s easy to be a part of this new Marketing Test Kitchen. Just find a compelling customer, use a clever way to tell their story, publish it to your blog, and share it by email. If you need more time, email us with your ideas as well! Make sure to track the performance of your blog post (and all other marketing efforts) through Google Analytics; you can learn how to code links and track traffic on the developer site.

Come up with the next Marketing Test Kitchen: Submit your idea via Buzz or email. We’ll evaluate the ideas and use the best ones for future initiatives. If we choose your initiative, we’ll give you a special prize.

Posted by Harrison Shih, Associate Product Marketing Manager, Google Apps Marketplace

Want to weigh in on this topic? Discuss on Buzz

The Google Apps Marketplace team is always looking for ways to help its vendors add new users and improve installation metrics. To help achieve these goals, we have launched the Apps Ecosystem Marketing Test Kitchen. Through experimentation, we want to collectively identify, test, and share best marketing practices for business web apps.

The first initiative we cooked up is designed to help you, as a vendor, minimize the abandonment rate of Marketplace prospects as they bounce around your Marketplace listing and various product pages without a clear “call to action”.


The Challenge:
The vendors who drive the most traffic and installs to their Marketplace listing page through their “Add to Apps” button between Nov 9th - 16th will be included in the front page Featured and Notable sections on the Apps Marketplace site.


Why participate in the challenge and use an “Add to Apps” button?
The “Add to Apps” button will improve your listing’s performance.
  1. Increase Conversions: Reduces the risk of users getting lost while navigating between the Marketplace and your website, which will result in a better experience for users and more installs for you.
  2. More accurate tracking: Properly encoding the button URL using Google Analytics will enhance data-driven tracking so that we can better work together to understand and improve the user acquisition funnel.
  3. Bonus - Get a front page feature: The six (6) top traffic/install drivers during the challenge time-frame will be featured on the homepage. We will list:
    • The top two (2) traffic/installs drivers by pure volume.
    • The top four (4) in traffic/install growth from previous weeks.
If you already use an “Add to Apps” button, then you are one step ahead. If not, add one to get in on the challenge. We will start tracking on November 9th, so you’ll want to get started properly coding and testing your button.


How do I participate and succeed in this test kitchen challenge?
  1. Add the “Add to Apps” button properly.
  2. Make sure that page is set as the “Vendor product home page” link in your listing on the Marketplace.
  3. Use marketing techniques to drive traffic through your landing page and potentially get featured on our front page.
  4. Use analytics to check visits made through the “button” medium, and see your traffic flowing to your app.
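For steps 1 and 4, the key is encoding Google Analytics campaign parameters into the landing-page URL so that button traffic appears under its own medium in your reports. Here is a minimal sketch of building such a URL; the domain and campaign names are hypothetical examples, not values the Marketplace requires:

```python
from urllib.parse import urlencode

def tag_landing_url(base_url, source, medium, campaign):
    """Append standard Google Analytics utm_* campaign parameters to a URL."""
    params = urlencode({
        "utm_source": source,
        "utm_medium": medium,
        "utm_campaign": campaign,
    })
    separator = "&" if "?" in base_url else "?"
    return base_url + separator + params

# Hypothetical vendor landing page; tag it so that visits arriving via the
# "Add to Apps" button are reported under the "button" medium.
url = tag_landing_url("https://www.example.com/apps-landing",
                      source="google-apps-marketplace",
                      medium="button",
                      campaign="test-kitchen")
print(url)
```

With the URL tagged this way, filtering your Analytics traffic report by the “button” medium isolates exactly the visits the challenge measures.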

This test kitchen should be an exciting way to cook up some tasty campaigns. If you have any good ideas or suggestions, pass them along to marketing-test-kitchen@google.com. Check for new challenges on this blog. To stay on top of any news on initiatives, also follow our Buzz and Twitter accounts, and subscribe to our email list.

Posted by Harrison Shih, Associate Product Marketing Manager, Google Apps Marketplace

Want to weigh in on this topic? Discuss on Buzz

It’s been almost four years since the Calendar API first supported the JSON format. However, our existing JSON format isn’t perfect. It is very much an automatic translation from our Atom format, and as a result it is very wordy and lacks the elegance that a native JSON dialect would offer. It also supports only read operations.

We have made our new JSON implementation cleaner, simpler and closer to what you would expect from JSON. For example, the long XML namespace prefixes are no more, and we've removed many pieces of metadata specific to Atom documents that come across as noise in JSON, making it easier to parse.

We’re calling this new format JSON-C. One of the major advantages of the JSON-C format, besides being read-write and more readable than the former JSON implementation, is that it is more compact than the Atom-based format. Below is an example:

Creating an event using JSON-C

POST /calendar/feeds/default/private/full HTTP/1.1
Host: www.google.com
Authorization: ...
Content-Type: application/json
GData-Version: 2.0
Content-Length: 233

{
  "data": {
    "title": "Tennis with Beth",
    "details": "Meet for a quick lesson.",
    "transparency": "opaque",
    "status": "confirmed",
    "location": "Rolling Lawn Courts",
    "when": [
      {
        "start": "2010-04-17T15:00:00.000Z",
        "end": "2010-04-17T17:00:00.000Z"
      }
    ]
  }
}
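As a sketch of how a client might assemble the JSON-C request body above (the endpoint and Authorization header are placeholders from the example, and no request is actually sent here):

```python
import json

# The JSON-C event payload shown in the example above.
event = {
    "data": {
        "title": "Tennis with Beth",
        "details": "Meet for a quick lesson.",
        "transparency": "opaque",
        "status": "confirmed",
        "location": "Rolling Lawn Courts",
        "when": [
            {"start": "2010-04-17T15:00:00.000Z",
             "end": "2010-04-17T17:00:00.000Z"},
        ],
    }
}

body = json.dumps(event).encode("utf-8")

# Headers matching the example request; Authorization is omitted here.
headers = {
    "Content-Type": "application/json",
    "GData-Version": "2.0",
    "Content-Length": str(len(body)),
}
# The body would then be POSTed to /calendar/feeds/default/private/full,
# e.g. with urllib.request, alongside an Authorization header.
print(len(body))
```

Because the payload is plain JSON, any standard JSON library can serialize it; there is no need for an XML toolchain or namespace handling.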

Creating an event using Atom

POST /calendar/feeds/default/private/full HTTP/1.1
Host: www.google.com
Authorization: ...
Content-Type: application/atom+xml
GData-Version: 2.0
Content-Length: 571

<entry xmlns='http://www.w3.org/2005/Atom' xmlns:gd='http://schemas.google.com/g/2005'>
  <category scheme='http://schemas.google.com/g/2005#kind'
      term='http://schemas.google.com/g/2005#event'/>
  <title type='text'>Tennis with Beth</title>
  <content type='text'>Meet for a quick lesson.</content>
  <gd:transparency value='http://schemas.google.com/g/2005#event.opaque'/>
  <gd:eventStatus value='http://schemas.google.com/g/2005#event.confirmed'/>
  <gd:where valueString='Rolling Lawn Courts'/>
  <gd:when startTime='2010-04-17T15:00:00.000Z' endTime='2010-04-17T17:00:00.000Z'/>
</entry>
In the example above, the body of the request is 59% smaller in JSON-C than in Atom. With gzip compression, the JSON-C body is still 37% smaller than the Atom body. This could make a big difference in mobile or other bandwidth-constrained environments.
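The size comparison is easy to reproduce. Below is a sketch that measures the two example bodies, raw and gzip-compressed; exact percentages will vary with whitespace and gzip settings, so treat the quoted figures as approximate:

```python
import gzip

# The JSON-C request body from the example (whitespace may differ slightly
# from the original, so the measured percentages are approximate).
jsonc = b"""{
  "data": {
    "title": "Tennis with Beth",
    "details": "Meet for a quick lesson.",
    "transparency": "opaque",
    "status": "confirmed",
    "location": "Rolling Lawn Courts",
    "when": [
      {"start": "2010-04-17T15:00:00.000Z", "end": "2010-04-17T17:00:00.000Z"}
    ]
  }
}"""

# The equivalent Atom request body from the example.
atom = b"""<entry xmlns='http://www.w3.org/2005/Atom' xmlns:gd='http://schemas.google.com/g/2005'>
  <category scheme='http://schemas.google.com/g/2005#kind'
      term='http://schemas.google.com/g/2005#event'/>
  <title type='text'>Tennis with Beth</title>
  <content type='text'>Meet for a quick lesson.</content>
  <gd:transparency value='http://schemas.google.com/g/2005#event.opaque'/>
  <gd:eventStatus value='http://schemas.google.com/g/2005#event.confirmed'/>
  <gd:where valueString='Rolling Lawn Courts'/>
  <gd:when startTime='2010-04-17T15:00:00.000Z' endTime='2010-04-17T17:00:00.000Z'/>
</entry>"""

raw_saving = 1 - len(jsonc) / len(atom)
gzip_saving = 1 - len(gzip.compress(jsonc)) / len(gzip.compress(atom))
print(f"raw: {raw_saving:.0%} smaller; gzipped: {gzip_saving:.0%} smaller")
```

Gzip narrows the gap because Atom’s repeated namespace URLs compress well, but the JSON-C body remains the smaller of the two.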

To retrieve events or other data in the JSON-C format, you have to specify the ‘alt’ URL parameter with the value ‘jsonc’ as shown below:

Requesting an event in JSON-C

GET /calendar/feeds/default/private/full/1234567890?alt=jsonc HTTP/1.1
Host: www.google.com
Authorization: ...
GData-Version: 2.0

Response

HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
...
{
  "apiVersion": "2.3",
  "data": {
    "title": "Tennis with Beth",
    "details": "Meet for a quick lesson.",
    "location": "Rolling Lawn Courts",
    ...
  }
}
For the request above, the body of the response is 53% smaller in JSON-C than in Atom - 30% smaller when using gzip compression.
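Since the response is plain JSON, consuming it takes one call in most languages. As a sketch in Python, using an abbreviated version of the response body above:

```python
import json

# Abbreviated JSON-C response body from the example above.
response_body = """{
  "apiVersion": "2.3",
  "data": {
    "title": "Tennis with Beth",
    "details": "Meet for a quick lesson.",
    "location": "Rolling Lawn Courts"
  }
}"""

event = json.loads(response_body)
print(event["data"]["title"])  # -> Tennis with Beth
```

Compare this with parsing the Atom equivalent, which would require an XML parser plus namespace-aware lookups for the gd: elements.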

To learn more about our new JSON-C format please read our updated Developer’s Guide. Have fun!

Want to weigh in on this topic? Discuss on Buzz