
RampUp 2020: RampUp for Developers Recap – Migration to Google Cloud Platform (GCP): Overview, Benefits, and Lessons Learned

  • 26 min read

Speakers:

  • Kelsey Hightower – Principal Developer Advocate at Google Cloud Platform
  • Sasha Kipervarg – Head of Global Cloud Operations at LiveRamp
  • Joshua Kwan – Engineering Lead, DevOps at LiveRamp
  • Patrick Raymond – Sr. Product Manager Infrastructure at LiveRamp

Our first session at RampUp was a panel about our migration to Google Cloud Platform (GCP). LiveRamp engineers moved the 79,800-core cluster, 90 PB of disk, and 256 TB of RAM that power our systems. The team overcame many obstacles along the way, and the move yielded multiple benefits, including redundant Google data centers with built-in data replication, plus the cloud-native architecture and security technology inside Google's cloud. The panel was moderated by Google Cloud Platform's Kelsey Hightower.

You can read the transcript below. You can download the slides from the presentation (which also appear throughout the video) here.

Panel Discussion:

Joshua Kwan: Cool. So before kicking off with the discussion, I'd like to share a little bit of background on what we'll be talking about today. When we talk about our migration, scale is one of the most important factors for us. Before GCP, we were all in an on-premise data center down on Third Street in SoMa. I think we were the majority occupant of that hosting facility – I don't even know what they're doing now – but we had 90,000 cores, 300 terabytes of memory, 100 petabytes of storage, hundreds of thousands of MapReduce jobs every day, 13 petabytes of I/O every day, and 500-plus VMs. I think that's a bit of an underestimate, but let's just say 500 to 2,000 VMs in our vSphere. In terms of real-time traffic, we were also getting 200,000 queries per second to our real-time integrations over HTTPS.
Joshua Kwan: To talk about what that stack actually did – I don't want to go into a huge discussion of what LiveRamp does – but ultimately all types of data signals come into our environment. We join them on common keys and use that to create insights that go out to our customers like ad networks and DSPs. The major technologies in there, what powered all of that, were Hadoop, VMware, and Chef to orchestrate the whole thing. And that was us right before the migration. I'll stop there for context – I think that's enough to get a good discussion going.
Kelsey Hightower: So I'm going to be moderating today. I'm from the Google Cloud side, and I've probably met hundreds of customers over the last four years. I've contributed to a lot of the technologies either on this screen or some of the ones we adopted in GCP. And I know developers face many of these same challenges. So whether you're running the business or developing in the business, this idea of when to go to cloud and why is going to be the context and foundation of our discussion today. We're going to have lots of time for questions and answers towards the end, but we're going to kick things off with a high-level overview of what pushed them to the cloud and what challenges they experienced in making this migration.
Kelsey Hightower: And hopefully we'll find out if it was successful or not. Hint: I wouldn't be here if it wasn't successful.
Joshua Kwan: Hint, we did it.
Kelsey Hightower: If it didn't work, I'm not coming. All right, so we're going to kick off this first question. I guess you kind of set the … stage around scalability. So why did you ultimately choose the cloud? Why not just build another data center? Why not scale and buy new hardware on-premise?
Sasha Kipervarg: So maybe I'll take that from my level first. What was interesting about what Joshua presented was the technical part of it; the execution part of it. But prior to all of that, as many of you at large companies know, there's a lot of alignment that has to happen; there's a lot of decision making about what to do and why to do it. And that took about a year for us. Ultimately, the reason we wanted to move to the cloud was that we wanted to provide our customers better service. We were in a position where we were a startup. We had grown very quickly. We did not pay as much attention to the other things so that we could go fast. And we outgrew ourselves in this data center here in San Francisco.
Sasha Kipervarg: And it was fine when we were a startup, but as we grew quickly – as our customer needs expanded, as the number of customers expanded – we couldn't just build another data center. The capital investment would have been enormous because we had 3,000 Hadoop servers, and if you do the mental math – 10, 15 grand a server, plus the number of people required to operate it – it just made no sense for us. The other reason, at a high level, that we wanted to move to the cloud was APIs. At the time, we were deploying a bunch of VMs and using the vCenter API to automate a lot of that work. And we had this philosophy at LiveRamp – we still have it today – where we want development teams to go really fast.
Sasha Kipervarg: We want them to deliver product as fast as possible, and that means being as independent as possible as well. You can kind of get there with the vCenter API, but you can get there much faster with a cloud API where every single product, every single service is driven by code. And so maybe at this point it makes sense to hand off this answer to Josh and Patrick to talk about it at their altitudes.
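For a concrete sense of what "every service driven by code" looks like, here is a minimal, hypothetical sketch of provisioning a VM through the Compute Engine API with the public google-api-python-client library. The project ID, zone, and instance name are placeholders, not anything from LiveRamp's environment.

```python
# A minimal sketch of self-service provisioning against the Compute Engine API.
# Project ID, zone, and instance name are hypothetical placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")  # uses application default credentials

project = "example-project"
zone = "us-central1-a"

instance_body = {
    "name": "worker-01",
    "machineType": f"zones/{zone}/machineTypes/n1-standard-4",
    "disks": [{
        "boot": True,
        "initializeParams": {
            "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
        },
    }],
    "networkInterfaces": [{"network": "global/networks/default"}],
}

# One API call provisions the VM; the same pattern covers disks, networks,
# GKE clusters, and so on, which is what lets teams self-serve.
operation = compute.instances().insert(project=project, zone=zone, body=instance_body).execute()
print(operation["name"])
```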
Joshua Kwan: Let me riff on that a little. What really struck me is when you said we couldn't move fast. As someone who services other development teams, I sensed that. Because we were constrained by the way our data center was constructed, everything had to be a Linux VM provisioned by Chef. Of course you could manually configure it to do other things, but there was a whole world of prebuilt solutions out there that we couldn't take advantage of while we were all in the data center.
Patrick Raymond: And I think another point that was interesting for us was that we didn't really have a great understanding of what the cost of our product was. Being able to look at cost information at a granular level, and to build products that we would never have built in a data center, were two other huge motivators for us.
Kelsey Hightower: So you built all this automation – I've seen this over time – you probably put a lot of work into building these integrations: automating vCenter, getting really good at Chef. And then you tell people: "Hey, we're moving to the cloud." Do you port a lot of that work, or do you kind of abandon ship and start over?
Joshua Kwan: I'll kick this one off. We evaluated very carefully what we should do: while we were stirring up all the dust with the old data center and making people change anyway, we decided to ask what we wanted to keep and what we wanted to change. There's actually a slide for how we adapted all of the concepts from our data center and tried to nudge developers just enough that everyone would adopt new technologies in the process. Basically, we saw this as a great opportunity to adopt containers. So all the setup that was done with Chef, now we're talking Docker, and at the macro orchestration level we're talking Terraform to provision out all our infrastructure.
Joshua Kwan: And needless to say, VMware just becomes Google Compute Engine and Google Kubernetes Engine. So at the same time we jumped to containers, we orchestrated those containers with Kubernetes, and that was a huge win compared to having to take Chef with us.
Patrick Raymond: And it was also interesting because it was a forcing function to move everybody to it. So as we migrated team by team, we also containerized all of our applications at the same time.
Sasha Kipervarg: Let me take that point in a slightly different way. There's something called the fallacy of sunk costs, which is this idea that you continue to invest in something even though it's not working, because you've made this huge past investment. A lot of teams fall into it – in fact, we did as well. One of the areas where this happened, where we had to make a hard decision, was on what cloud we would use. When we first started this idea – the migration – we were mostly in AWS, actually. And when I say mostly, I mean we just had a very limited number of products in AWS, and we think they're a great cloud provider as well. But for the things that we wanted to do, for the costs that we were looking for, it ultimately turned out that GCP was the right thing for us.
Sasha Kipervarg: But that was a train going down the track at about 200 miles an hour, and about two months before we signed the deal, we paused for a moment and said: "Is this still the right thing?" All the tech leads got together, we discussed, we went off into our separate corners and reevaluated the data – because the data had been coming in continuously – and said: "You know what, actually GCP is the right thing for us." And I'm sure that many of you who are making decisions are in this sort of process as well, where you've made a huge upfront investment in something – whether it's VMs or a cloud provider – and you're trying to figure out whether you should continue or take that risk. And I would encourage you to take that risk.
Kelsey Hightower: I see a lot of new logos on the other side of that screen. What was the time delta to get the expertise required to really say, "We're going to do this"?
Joshua Kwan: It's all a blur now. But I think one thing that helped us is that I didn't originally join LiveRamp. I joined a startup called Arbor Technologies that was then acquired by LiveRamp. What was useful about that was that Arbor was already Google Cloud Platform native. We were already using Kubernetes Engine, we were already using Docker, and we were already using Cloud Storage. So in conjunction with Sasha and Patrick, I was able to take that expertise and start getting on a pedestal and saying, okay, here's what everyone ought to do – learn from the mistakes that we made at Arbor. For me, that was a very fortunate thing. I don't know if I have a generic answer that will help with this.
Kelsey Hightower: So a lot of people have experience where there is a team that has expertise. I meet a lot of enterprises that have this cloud center of excellence – anyone ever heard of those? So you try to isolate a lot of the innovation to a small group of people. Did you find it challenging trying to scale out that expertise across the rest of LiveRamp?
Joshua Kwan: Absolutely. So-
Kelsey Hightower: And how did you do it, specifically?
Joshua Kwan: I got on stage a lot at our developer meeting and just said, here, during this hack week we're going to learn about Kubernetes, and the first person to get a container up and running that I can get a 200 from gets a prize, or whatever. So there was a bit of gamification. There was also just sitting with individual teams and letting them express their concerns with the new world, then tailoring our approach: okay, you need to learn about this, so let's do one session about that. And of course we would iterate on that and improve it for the next time, for the next team that was migrating. I'd like to let Patrick talk more about that part.
Patrick Raymond: And from a tactical perspective, the infrastructure team – we've heard them referred to as tiger teams elsewhere – basically split up and went and worked directly with other teams, scrumming with them for extended periods of time until the expertise was there. Because we didn't only move to these new technologies; we also shifted our approach to teams owning their own infrastructure, which was not a concept we had much of beyond the infrastructure team prior to this. So there were a lot of changes, and it required directly working with all the teams, scrumming with them, working with them on a daily basis for about 14 months.
Kelsey Hightower: So you put this master plan together, right? You're standing back admiring your work, and then it's time to go. Any challenges in that process – you're in your data center, now you're moving things over – anything that came up unexpected?
Sasha Kipervarg: So the biggest thing that came up was that we were part of this company called Acxiom, and then it was announced that we would be independent. Prior to then, we had someone manage our data center for us. They would rack and stack, they would deal with the storage devices and the network devices, and we would mostly just deal with the vCenter APIs. All of a sudden we had no idea what the hell to do, because we were mid-migration. So we found a company to take on that work for us, because I wanted to focus our team on our future, not our past, and continuing to invest in our data center by hiring personnel, even for the short term, didn't make any sense at all. I think there were probably other really good technical and execution examples of this shift during the migration too.
Joshua Kwan: One interesting thing was that we weren't even sure if we could get data out of the data center fast enough. For example, we have a data pipeline, right? Data moves from left to right, and we wanted to send data out to GCP and complete the pipeline from there. But we weren't certain we could blast data out of our colo fast enough to get it over to GCP in time to meet our SLAs and things like that. So one thing that was really interesting was – just how long was the build-out process for the interconnect?
Patrick Raymond: It was four months.
Joshua Kwan: Four months of working with vendors and so on, and I had no idea if it was actually going to work. They brought all this expensive fiber equipment to our data center and we were just like, okay, well, we're going to flip it on, and you tell us whether it's fast enough.
Kelsey Hightower: So for those unfamiliar, a typical approach to this situation is that you want your networking as integrated as possible with the cloud provider. That's the concept of direct connect, where you basically run a line as close as you can get to the nearest peering point, and now you kind of have your own private pipe. After doing that, was there a cost savings? I'm pretty sure you got some performance, maybe lower latency. Is there any cost advantage to just direct connecting to GCP?
Patrick Raymond: Not for us because we had a very limited pipe out of our data center. So we were actually looking at potentially investing another million and a half dollars to expand the pipe so that we could move fast enough. So it was actually one of the biggest limitations that we had throughout the entire migration. Definitely not any cost savings for that.
Kelsey Hightower: All right. So before we go to the audience, one more question: what advice do you have for anyone else looking to make this migration – whether it's from another cloud provider, or having this on-prem-to-cloud discussion for the very first time? We'll start with you.
Sasha Kipervarg: Some of the coolest things I saw during the migration were teams owning it themselves – the process of them becoming teams by facing a challenge they had never faced before and working their way through it. It's like trial by fire, and it was a beautiful thing. I think there are some teams at LiveRamp that know they were created in that moment when they had to get the migration done, when they had to work over the weekend. They faced a challenge they had never faced before and they overcame it. Don't discount that. Trial by fire is a beautiful way of creating shared pain and empathy and getting things done. I see a gentleman laughing here. He's probably experienced that.
Joshua Kwan: From my side, I think it's a bit of a rehash of what you said earlier about the fallacy of sunk costs, but this experience taught me that you're going to make a lot of directional mistakes constantly throughout a process like this, because it's full of unknown unknowns. So the best thing you can do most of the time is put your best straw man forward and say, okay, we're going to try this, and if it doesn't work then we're going to adapt course and repair it. And we did this so many times with all our engineering teams. So, thank you – if there are any LiveRamp engineers here, thank you for your patience, because we were like, do X. Wait, sorry, do Y. And that happened five or six times during the ramp-up of the consultation tack we took with teams to get them migrated to GCP.
Joshua Kwan: And of course, near the end we had it pretty much down. But don't be afraid to say you made a mistake, and to say how you're moving forward and how that's going to help everyone.
Kelsey Hightower: I have a question to pull on that thread a bit. How much executive or leadership cover was provided to do this? I can imagine someone in the room going: this ain't going well, abandon ship, stay where you are.
Joshua Kwan: I was scared of that at a few moments, I guess; scared of extending our data center bill and things like that. I'd like Sasha to talk about this, but I felt very much that as long as I knew in my heart of hearts I was doing the right thing to get the migration over the hump in time, everything would be okay. Maybe you can talk about that a little more.
Sasha Kipervarg: I mean, it took a lot of alignment. It's a large company, and any large company has a lot of different people with a lot of different perspectives and a lot of different goals, and you want to align all of those. Prior to the execution of the migration, we aligned all the executives on our schedule, what the plan was, what the rationale was, the numbers – and I think that helped a lot. And when there were cases of dissonance about how we should proceed, because people had slightly different goals and objectives that clashed with delivering product, we were able to get the executives into a room and get the support that we needed.
Sasha Kipervarg: Going back to the previous question, I think there's one really important thing I want to tell you all that will help when you're doing your migration, and it is this: you're going to have budget overruns. You're essentially taking the decision to spend money and moving it from central finance to the very edge, to the developer's fingertips, and they can spin up 10,000 instances and start billing … Kelsey's smiling because that's how –
Kelsey Hightower: We really appreciate you allowing your developers to spend money with us.
Sasha Kipervarg: But you're making that decision potentially without any governance in place, without helping the developers understand what they're spending, what they're spending it on, and what their budget should be. I would encourage all of you to spend a lot of time thinking through that during your migration, before the migration, and after. Just keep a really close eye on it, because if you don't, your CFO will knock on your door just like ours did.
Kelsey Hightower: The finance team is here, like, hell yeah. All right. Hopefully we're going to leave a bit of time for … there's 10-plus minutes left. So hopefully there are some questions from the audience, and then we want to give you an authentic view into the world of things that we didn't cover. There has to be questions.

Audience Q&A:

Audience Question: I have a question for you.
Kelsey Hightower: All right.
Audience Question: So when you migrated to GCS, did you also have to migrate how you process the data downstream? Because with HDFS it implies [inaudible 00:19:39] or cloud or something. And then with GCS, what tools were you using to process the data, and was that even part of your calculus here?
Kelsey Hightower: Summarize the question and then answer it, please.
Joshua Kwan: So the question was, when we migrated between HDFS and Google Cloud Storage, what techniques changed? How did the data processing change? Full disclaimer, I'm not part of the team that focused on the Hadoop parts, so I may mangle this answer, but from my understanding, because there's a Cloud Storage connector that implements the HDFS interface in Java, there was some seamlessness to it. I think most of the operational changes happened in terms of data resilience and retention times, where instead of implementing cleanup policies on HDFS, we were able to set up lifecycles on Google Cloud Storage saying, hey, delete this data after X days or after a certain number of versions crop up. Does that answer your question?
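As an illustration of the lifecycle policies Joshua mentions, here is a hedged sketch using the public google-cloud-storage Python client. The bucket name and thresholds are illustrative, not LiveRamp's actual settings.

```python
# A minimal sketch of GCS lifecycle rules that replace HDFS cleanup jobs.
# Bucket name and thresholds are hypothetical placeholders.
from google.cloud import storage

client = storage.Client()
bucket = client.get_bucket("example-pipeline-output")  # hypothetical bucket

# Delete objects once they are older than 30 days...
bucket.add_lifecycle_delete_rule(age=30)
# ...or once more than three newer versions of an object exist.
bucket.add_lifecycle_delete_rule(number_of_newer_versions=3)

bucket.patch()  # persist the updated lifecycle configuration on the bucket
```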
Audience Question: Did it end up being the same data processing cluster, when you said it was seamless?
Joshua Kwan: Did it end up being the same data processing cluster? So we used the same Cloudera product. I think we changed from Cloudera … oh gosh, what was it … to Cloudera Director?
Patrick Raymond: I don’t remember the original one that was-
Joshua Kwan: But now we're on Cloudera Director, and instead of spinning up or managing hardware, it spins up compute instances for us, and ultimately we're still paying Cloudera to orchestrate our Hadoop.
Sasha Kipervarg: And I should note that beyond Cloudera Director, we're exploring Dataproc as an alternative as well. The principal challenge there is really around cost. We process so much data, and our clusters are so large and long-lasting, that we're still figuring out whether Dataproc is a good fit for us.
Kelsey Hightower: Here we go.
Audience Question: So you also talked about massive amounts of data and moving over to a new processing stack and hosting. One of the big challenges is usually trying to move the mountain of data to a whole new infrastructure. How much of it was just new incoming data starting to fill the new lake, with the legacy data left behind, versus actually moving the data? In other words, was there data that just aged out and eventually went away, or did you actually move tera… petabytes of data from one place to the next?
Patrick Raymond: So, I'm not an engineer, so I may mangle this one, but I'll do my best. When we started, we had to figure out, first of all, how do we move this entire stack? And we wanted to avoid huge networking costs from sending everything in and then sending everything back out. So we actually moved everything backwards: we started at the end of the pipe and then moved forwards. Throughout that process, because our pipe was so small, we moved that mountain slowly over the course of six months – for the data that had to stick around – and the things we didn't need to persist, we just got rid of.
Patrick Raymond: We were pretty ruthless about that and got rid of a significant amount of data and tightened up our life cycles on the data that we had. Does that answer the question?
Kelsey Hightower: I want to pull on that one, because there's another concept: in the data center, once you fill up all the disks, backups become a thing that you either just don't do or that becomes too expensive, and some of the same concerns come in here. Could you all pull on the thread around whether it opened up some new capabilities in terms of reliability, replication, and backup?
Joshua Kwan: Absolutely. I think the easiest way to summarize it is that decisions that were constrained by gear or hardware or networking capacity are now just dollar amounts – for the most part, right? We have run into Google product limits, but for the most part, any time we want to say, okay, we need to hold on to a little more data, well, then we'll just pay more for it. I'll give you a good example. We have this monolithic database server and a lot of products rely on it, so it's hard to do maintenance on it. If we ran out of space on that, we'd have to go tell people: "Hey, can you clear some space on the database, because we're not going to take it offline to upgrade the hard drives." Now I can just go into the compute instance and resize the persistent disk – boom, done.
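For context on that "resize the persistent disk" step, here is a rough sketch of the equivalent Compute Engine API call (the same operation the `gcloud compute disks resize` command performs). The project, disk name, and target size are placeholders, and the filesystem still has to be grown inside the guest afterwards.

```python
# A rough sketch of resizing a GCE persistent disk via the Compute Engine API.
# Project, zone, disk name, and size are hypothetical placeholders.
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

compute.disks().resize(
    project="example-project",   # hypothetical project ID
    zone="us-central1-a",
    disk="monolith-db-data",     # hypothetical disk name
    body={"sizeGb": "2000"},     # new size in GB; persistent disks can only grow
).execute()
# After the disk grows, the guest filesystem still needs to be expanded
# (for example with resize2fs on ext4).
```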
Sasha Kipervarg: And along with that decision comes knowing what you're spending on that resource as well. Patrick and team have spent a lot of time getting teams to tag their resources so that we can present each team a list of, well, here's what you're spending money on. And if you want to increase the disk size on the database, go ahead and do it – but do it knowing what it's going to cost. I also wanted to mention that that was a great question you asked before, because it was one of the key things that we didn't know before the migration.
Sasha Kipervarg: We actually weren't sure it was going to be possible, because we didn't know how to structure that pipeline and how to synchronize the data, and we weren't even sure Google could give us a cluster that large in a particular data center. We spent many months figuring out what we just described.
Kelsey Hightower: I would say one thing to keep in mind too; I see a lot of customers struggle with this. The role you probably want to identify when you think about this move is someone who can actually analyze the spend, right? Looking across the resources, putting labels on there, because of the billing data and the APIs around it. Some of our more advanced customers drop their billing data into something like BigQuery or their existing BI tools. Just have that role ready. If it's someone from the finance team, they can help a lot by coordinating with the operations team, because it's very easy to make a decision based on what you're seeing in the live billing data.
Kelsey Hightower: You can say: "Hey, that's too big. Can someone justify it?" And they say, "Oh, we forgot to turn it off." So it's something you probably want to do more actively than quarterly, or than during your normal hardware refresh cycle.
Patrick Raymond: And if I could just pull on that thread a bit; that's a role that we're now spinning up after learning about this through pain. But it's definitely something, as Kelsey mentioned, that you should identify very early on. Having that person who's technical and understands finance, and who can partner directly with engineering teams to help make architectural decisions around cost that are also good engineering solutions, is a really critical role.
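As a sketch of the spend-analysis loop Kelsey and Patrick describe, here is a hypothetical BigQuery query over a standard billing export, broken down by a "team" resource label. The table name and label key are assumptions about how such an export might be set up, not LiveRamp's actual configuration.

```python
# A hedged sketch of cost-by-team analysis over a GCP billing export in BigQuery.
# The export table name and the "team" label key are hypothetical placeholders.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT
  label.value AS team,
  ROUND(SUM(cost), 2) AS total_cost
FROM `example-project.billing.gcp_billing_export_v1_XXXXXX`,
  UNNEST(labels) AS label
WHERE label.key = 'team'
  AND usage_start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY team
ORDER BY total_cost DESC
"""

# Print a simple 30-day spend breakdown per team label.
for row in client.query(query).result():
    print(f"{row.team}: ${row.total_cost}")
```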
Kelsey Hightower: All right, we’ve got time for a couple more questions. One here.
Audience Question: I’m going to break your heart, sorry.
Kelsey Hightower: Go ahead.
Audience Question: Why did you guys pick GCP instead of Azure or AWS?
Sasha Kipervarg: So let me answer that in a limited fashion; I don't want to get too far into it. The question was: why did we pick GCP over Azure or AWS?
Kelsey Hightower: Because it was the right thing to do.
Sasha Kipervarg: Because of Kelsey, that's why. So we looked at all those solutions. We were using AWS in a limited fashion with some products, but we had expertise from Arbor that understood GCP really well. AWS was certainly in the running, and we continue to use AWS. We use them in China, we make acquisitions that are in AWS, and we think they'll probably be there for a while – until Google develops some of their products in a more holistic way. So the answer is, we are multi-cloud. Now, the reason we didn't pick Azure was that when we looked at Azure and their use of containers and Kubernetes, they seemed pretty far behind Google at the time.
Sasha Kipervarg: And we knew that, beyond Dataproc, containers and containerization were going to be a dominant technical strategy for us. So it just didn't make sense at the time. Where they are today, I don't know – I haven't taken a look. I would encourage you all to look at all the vendors to make a good decision for yourselves.
Joshua Kwan: If I could jump in there, I think one of the things that pushed GCP over the edge at the engineering level was our ability to interface directly with engineers. The number of hops to reach an engineer – or at least a technical person – was very low compared to AWS. We had this stable of, quote unquote, sales engineers at AWS who weren't really technical at all, and that was really frustrating for us. Then, when we were asked to consider the change, we started talking to people like Kelsey, and we were blown away. We were like, we have access to people who think the way we do, who ultimately want us to succeed, and who can help us technically on a very intimate level.
Kelsey Hightower: So to pull on that thread a bit more: one big decision we see is around the open source bits like Kubernetes. I use all of the other cloud providers just to stay authentic, to be connected and tuned into the same decision framework that customers are using, and you can just kind of feel it with your own hands when you click the GKE button and it comes up – you have a level of confidence that Google is all in on its Kubernetes offering. And then we do try to make our engineering teams super available. I was a contributor to Kubernetes for a long while. I studied the project, I know a lot of the use cases, I've been a user of it. And we tend to make our core engineering teams available to our customers directly.
Kelsey Hightower: We whiteboard with them, we co-design with them and we just see that as a natural way of Googlers working. That’s how we work with each other internally and that’s how we choose to interact with our customers on GCP as well.
Sasha Kipervarg: The other thing I wanted to note was we want to be where our customers want us to be. So we have customers who want to pick up their data in an AWS bucket and if it’s a product requirement then we’re there too. And I think the way to think about it is not as a technical decision necessarily, but a business first decision.
Kelsey Hightower: We’ll do one more question. We’ll go here.
Audience Question: So congrats on the migration. What’s next for you guys? What are your next goals?
Sasha Kipervarg: So I'll speak about it from my altitude, and then maybe the other guys can fill in. The principal challenge we have at the moment is the FinOps challenge, right? We have this budget that went over, and we've been spending the last eight months doing a lot of different things to rein it back in. One of them is to pick off low-hanging fruit: data that doesn't need to be there, basic re-architectures. We've had some teams – one team in particular, our online identity team – do a full re-architecture, and they reduced their costs by 75%, which is amazing.
Sasha Kipervarg: We are establishing a training program for our new budget owners who are basically engineering directors. We’re establishing training programs for our engineers so that they know what it means to be in control of spend as well. And we expect all that stuff to pay off over the next year.
Joshua Kwan: To jump in from the engineering side, I would say that, obviously, the past months were all focused on just getting over the hump, right? And we made some weird decisions, or created some technical debt, to get us over the edge. I think fixing that is top priority for us, particularly in the realm of access control, governance, and security, and getting everything that was created manually in the cloud platform into Terraform – just to make sure we have a record of everything that's been created, so in case we have a fat finger, we can always get one step closer to recreating it from source code.
Patrick Raymond: And maybe a final thought, along the thread that Josh is speaking about: we're trying to get SRE practices across all of the teams. That's a multi-month, year-long effort to work with each team and make sure that they're able to own their own stacks, and the reliability of those stacks, as well.
Kelsey Hightower: Awesome. That’s our time. Thank you all for attending the session with us.

Interested in more content from RampUp?

Clicking on the links below (to be posted on an ongoing basis) will take you to the individual posts for each of the sessions, where you can watch videos, read the full transcript of each session, and download the slides presented.

RampUp for Developers' inaugural run was a great success and was well attended. Many interactions and open discussions were spurred by the conference tracks, and we are looking forward to making a greater impact with engineers and developers at future events, including our RampUp on the Road series (which takes place throughout the year, virtually and at a variety of locations), as well as during next year's RampUp 2021 in San Francisco. If you are interested in more information or would like to get involved as a sponsor or speaker at a future event, please reach out to Randall Grilli, Tech Evangelist at LiveRamp, by email: [email protected].