Winter 2020 Speaking and Events

LiveRamp’s Engineering Team will be speaking at a number of conferences and events in the coming weeks. You can see them speak at the following: 4th Annual Global Artificial Intelligence Conference. Who: Akshaya Aradhya, Director of Engineering at LiveRamp. When: January 21–23, 2020. Where: Santa Clara Convention Center. The Global Big Data Conference’s 4th Annual […]

New to RampUp 2020: RampUp for Developers

Introducing RampUp for Developers. RampUp is LiveRamp’s proprietary conference, where industry leaders discuss MarTech, customer experience, and data regulation. This year’s conference will take place on Monday and Tuesday, March 2nd and 3rd, in San Francisco, and we’re introducing a new track: RampUp for Developers. You can register to attend here. About RampUp […]

Joining Petabytes of Data Per Day: How LiveRamp Powers its Matching Product

Our data matching service processes ~10 petabytes of data and generates ~1 petabyte of compressed output every day. It continuously utilizes ~25k CPU cores and ~50 terabytes of RAM on our Hadoop clusters, and these numbers keep growing as more data flows through our platform. How can we efficiently process the […]

Migrating a Big Data Environment to the Cloud, Part 5

What next? In the previous posts about our migration, we asked ourselves: Why do we want to move to the cloud? What do we want our day 1 architecture to look like? How do we get there? How do we handle our bandwidth constraints? The last and most exciting questions are, “What comes next?” “How […]

Migrating a Big Data Environment to the Cloud, Part 3

How do we get there? In part 2 of this series we discussed what we wanted our cloud MVP to look like. The next question was: how do we get there without turning the company off for a month? We started with what we knew we needed. For at least a few months, our […]

Migrating a Big Data Environment to the Cloud, Part 4

Copying to the cloud. LiveRamp is in the midst of a massive migration of all of our infrastructure to GCP. In our previous posts we talked about our migration and our decision to use Google as our cloud provider. In this post, I want to zoom in on one major problem we needed to solve to […]