Jira to Redshift

Howdy! If you’ve made it this far, it must be because you want to move your Jira data to your Redshift data warehouse. This page will provide you with instructions on how to extract data from Jira and load it into Redshift. (If this manual process sounds like a big project, check out Stitch, which can immediately save you from any further headaches.)

Pulling Data Out of Jira

For starters, you need to get your data out of Jira. This can be done by making calls to Jira’s REST API or by using webhooks. We’ll focus on the REST API here because it lets you retrieve all of your historical data, not just new real-time data.

Details of setting up the right environment for developing your Jira integration can be found in Jira’s developer documentation. To use a REST API, your application will make an HTTP request and parse the response. The Jira REST API uses JSON as its communication format, and the standard HTTP methods like GET, PUT, POST, and DELETE will be your major tools here.

Jira’s API offers access to resources like issues, comments, and numerous other endpoints. Using the methods outlined in the API documentation, you can retrieve the data you’d like to pipe into Redshift.
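The search endpoint returns results one page at a time, using the startAt, maxResults, and total values you’ll see in the sample response below. Here’s a minimal Python sketch of the paging loop; the fetch_page callable is an assumption — you would wire it up to an HTTP GET against your own Jira instance with your own credentials.

```python
def fetch_all_issues(fetch_page, page_size=50):
    """Page through Jira's issue search results.

    fetch_page(start_at, max_results) is assumed to GET
    /rest/api/2/search?startAt=...&maxResults=... against your
    Jira instance and return the decoded JSON response as a dict.
    """
    issues = []
    start_at = 0
    while True:
        page = fetch_page(start_at, page_size)
        issues.extend(page["issues"])
        start_at += len(page["issues"])
        # Stop once we've walked past the server-reported total,
        # or if the server returns an empty page.
        if start_at >= page["total"] or not page["issues"]:
            break
    return issues
```

Injecting fetch_page this way keeps the paging logic separate from authentication and HTTP details, which vary between Jira Cloud and self-hosted instances.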

Sample Jira Data

Once you successfully query the Jira API, it will return JSON-formatted data. Below is an example response from the issue search endpoint.

{
    "expand": "schema,names",
    "startAt": 0,
    "maxResults": 50,
    "total": 6,
    "issues": [
        {
            "expand": "html",
            "id": "10230",
            "self": "http://kelpie9:8081/rest/api/2/issue/BULK-62",
            "key": "BULK-62",
            "fields": {
                "summary": "testing",
                "timetracking": null,
                "issuetype": {
                    "self": "http://kelpie9:8081/rest/api/2/issuetype/5",
                    "id": "5",
                    "description": "The sub-task of the issue",
                    "iconUrl": "http://kelpie9:8081/images/icons/issue_subtask.gif",
                    "name": "Sub-task",
                    "subtask": true
                },
.
.
.
                },
                "customfield_10071": null
            },
            "transitions": "http://kelpie9:8081/rest/api/2/issue/BULK-62/transitions"
        },
        {
            "expand": "html",
            "id": "10004",
            "self": "http://kelpie9:8081/rest/api/2/issue/BULK-47",
            "key": "BULK-47",
            "fields": {
                "summary": "Cheese v1 2.0 issue",
                "timetracking": null,
                "issuetype": {
                    "self": "http://kelpie9:8081/rest/api/2/issuetype/3",
                    "id": "3",
                    "description": "A task that needs to be done.",
                    "iconUrl": "http://kelpie9:8081/images/icons/task.gif",
                    "name": "Task",
                    "subtask": false
                },
.
.
.
            "transitions": "http://kelpie9:8081/rest/api/2/issue/BULK-47/transitions"
        }
    ]
}

Preparing Jira Data for Redshift

Once you get your data from the Jira API in JSON format, it’s time to start thinking about mapping it into Redshift. For each value in the response, you need to identify a predefined data type (e.g., INTEGER, TIMESTAMP) and build a table in your database that can receive it.

The Jira API documentation can give you a good sense of what fields will be provided by each endpoint, along with their corresponding data types. Once you have identified all of the columns you will want to insert, use the CREATE TABLE statement in Redshift to create a table that can receive all of this data.
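To make this concrete, here is a minimal sketch: flatten the nested fields you care about into one row per issue, paired with a matching CREATE TABLE statement. The table name (jira_issues) and column choices are illustrative assumptions, not a complete schema — a real mapping would cover many more fields.

```python
# Illustrative schema only: a handful of columns pulled from the
# nested issue JSON. Extend this to match the fields you need.
CREATE_TABLE_SQL = """
CREATE TABLE jira_issues (
    id          INTEGER      NOT NULL,
    issue_key   VARCHAR(64)  NOT NULL,
    summary     VARCHAR(512),
    issue_type  VARCHAR(64),
    is_subtask  BOOLEAN,
    PRIMARY KEY (id)
);
"""

def flatten_issue(issue):
    """Map one issue from the API response to a flat row dict
    whose keys line up with the columns above."""
    fields = issue["fields"]
    return {
        "id": int(issue["id"]),            # API returns ids as strings
        "issue_key": issue["key"],
        "summary": fields.get("summary"),
        "issue_type": fields["issuetype"]["name"],
        "is_subtask": fields["issuetype"]["subtask"],
    }
```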

Inserting Jira Data into Redshift

It may seem like the easiest way to add your data is to build tried-and-true INSERT statements that add data to your Redshift table row by row. If you have any experience with SQL, this will be your gut reaction. It will work, but it isn’t the most efficient way to get the job done.

Redshift actually offers some good documentation on how best to bulk load data into new tables. The COPY command is particularly useful for this task, as it allows you to insert multiple rows without needing to build individual INSERT statements for each row.
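The usual pattern is to write your flattened rows to a file in S3 and point COPY at it. Below is a hedged sketch: the bucket path, file name, and IAM role are placeholders you would replace with your own values, and the column list assumes a hypothetical jira_issues table.

```python
import csv
import io

def rows_to_csv(rows, columns):
    """Serialize flattened row dicts to CSV text, ready to upload
    to S3 as the source file for a COPY command."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow([row[c] for c in columns])
    return buf.getvalue()

# Placeholder bucket and IAM role -- substitute your own values.
COPY_SQL = """
COPY jira_issues (id, issue_key, summary, issue_type, is_subtask)
FROM 's3://my-bucket/jira/issues.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
CSV;
"""
```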

If you cannot use COPY, it might help to use PREPARE to create a prepared INSERT statement, and then use EXECUTE as many times as required. This avoids some of the overhead of repeatedly parsing and planning INSERT.
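If you go this route, the shape is a single PREPARE followed by repeated EXECUTEs. A sketch, again assuming the illustrative jira_issues table; your SQL client or driver would run each statement against Redshift.

```python
# PREPARE once; Redshift numbers the parameters $1, $2, ...
PREPARE_SQL = """
PREPARE insert_issue (INTEGER, VARCHAR, VARCHAR, VARCHAR, BOOLEAN)
AS INSERT INTO jira_issues VALUES ($1, $2, $3, $4, $5);
"""

def execute_statements(rows):
    """Yield one (statement, params) pair per row; feed these to
    your SQL client after running PREPARE_SQL once."""
    for row in rows:
        yield ("EXECUTE insert_issue (%s, %s, %s, %s, %s)",
               (row["id"], row["issue_key"], row["summary"],
                row["issue_type"], row["is_subtask"]))
```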

Keeping Data Up-To-Date

OK, now what? You’re pulling data from Jira and moving it to Redshift. Problem solved, right? Actually, it’s time to make a plan for Monday, when there will be 15 new issues and 10 more that were updated or changed over the weekend.

What you want to do is build your scripts so that they can identify incremental updates to your data. Luckily, Jira’s API results include fields like created and updated that allow you to quickly identify records that are new or changed since your last update (or since the newest record you’ve copied into Redshift). You can set your script up as a cron job to keep pulling down new data as it appears.
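One way to sketch this: Jira’s search endpoint accepts a JQL query, so each run can ask only for issues updated since your last successful sync. The helper below builds that query; how you store the checkpoint timestamp between runs is up to you.

```python
from datetime import datetime

def incremental_jql(last_sync):
    """Build a JQL query for issues changed since the last sync.

    JQL compares timestamps at minute precision, so the value
    is formatted as 'yyyy-MM-dd HH:mm'. Ordering by updated
    ascending makes it easy to checkpoint as you page through.
    """
    stamp = last_sync.strftime("%Y-%m-%d %H:%M")
    return 'updated >= "{}" ORDER BY updated ASC'.format(stamp)
```

Pass the resulting string as the jql parameter on the search endpoint, then record the newest updated value you saw as the checkpoint for the next cron run.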

Other Data Warehouse Options

Redshift is totally awesome, but sometimes you need to start smaller or optimize for different things. In this case, many people choose to get started with Postgres, which is an open source RDBMS that uses nearly identical SQL syntax to Redshift. If you’re interested in seeing the relevant steps for loading this data into Postgres, check out Jira to Postgres.

Easier and Faster Alternatives

Even if you have all the skills necessary to go through this process, chances are you have other projects that need your focus.

Luckily, powerful tools like Stitch were built to solve this problem automatically. With just a few clicks, Stitch starts extracting your Jira data via the API, structuring it in a way that is optimized for analysis, and inserting that data into your Redshift data warehouse.