During Neo4j’s inaugural GraphSummit, we had the pleasure of hosting partners and customers to share insights and stories behind their connected data experiences. We will be featuring more of them in this series of blogs – so watch this space for more recaps. For the fifth presentation in the series, we’re sharing a highlight from the Melbourne stop of the tour. In this fireside chat with Rijutha Sivaprakas, a Technology Consultant for Ampion (a Wipro company), we discuss how a leading Australian bank is using data-driven insights to optimize its DevOps practices and migrate to the cloud with Neo4j.
Enjoy! And for more information, please write to me at daniel.ng@neo4j.com.
Moderator: Please tell us about your role with Ampion.
Rijutha Sivaprakas: I work as the Lead Consultant within the digital engineering practice at Ampion. What that means is that I work as a tech lead on data engineering projects for Ampion’s customers and clients. I started my journey in IT close to 12 years ago with Accenture as a Production Support Engineer. I moved on to monitoring frameworks and application development, and then over to data engineering. I’ve been with Ampion for over five years now.
Moderator: What made you first think of graphs? Was there any particular driver within your business or your customer’s business?
Rijutha Sivaprakas: I work within the payments space of a large financial institution that is one of Ampion’s customers. I work in a team that specializes in optimizing platform delivery and monitoring frameworks, using automation, tooling, and analytics to get the highest possible visibility into payments processing performance, to improve the performance of the systems involved in payments processing, and to ensure stability. Payments are essential to the functioning of the financial institution, and any outage or service disruption to payment systems could mean a huge loss of revenue and of trust with the bank’s customers.
Service outages are not completely avoidable, but you can at least reduce the impact of such disruptions. To do that, just like every other large enterprise, a bank needs to keep up with technology trends and modernize its systems and applications. Especially since the introduction of real-time payments, it has become increasingly important that we do not extend or exacerbate existing IT issues, or introduce new ones, while modernizing and embracing DevOps for increased stability and agility.
This is why the whole journey needs to be data-driven. It’s what led to the requirement of building a library of reference that would give end users such as business managers, tech owners, service owners, command center personnel, and site reliability engineers an understanding of how payments flow through the different systems. Even more importantly, it gives an understanding of what those different systems are, how they interact with each other, and how crucial each one is to the entire payment process. This differs by payment type and by the infrastructure hosting it. All of this ties back into the business services provided by the payments space.
We built that solution on an existing application, and when we started getting the real-time payments transaction data (how payments traverse the systems, augmenting the business service and underlying host data we already had), it became hard to maintain and enhance this data because more and more data sets kept being added to it. Users trying to search the data were not happy with the performance and results they were getting. This data is highly interconnected and very hierarchical; broadly, we deal with three types of data: small, wide, and deeply hierarchical data with complex, many-to-many relationships. It was hard to manage all three in our existing system, and we came to realize that a graph database was the answer to the problem.
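To make that model concrete, here is a minimal, hypothetical sketch of the kind of payment-flow graph Rijutha describes. The labels, relationship types, system names, and connection details are all illustrative assumptions, not the bank’s actual model:

```python
# A minimal sketch of a payment-flow model: systems, the hosts they run on,
# the business services they support, and how payments flow between them.
# All names, labels, and credentials below are illustrative assumptions.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    session.run(
        """
        MERGE (g:System {name: 'PaymentGateway'})
        MERGE (c:System {name: 'ClearingEngine'})
        MERGE (l:System {name: 'LedgerService'})
        MERGE (h:Host {name: 'prod-host-01'})
        MERGE (b:BusinessService {name: 'RealTimePayments'})
        MERGE (g)-[:FLOWS_TO {paymentType: 'real-time'}]->(c)
        MERGE (c)-[:FLOWS_TO {paymentType: 'real-time'}]->(l)
        MERGE (c)-[:HOSTED_ON]->(h)
        MERGE (c)-[:SUPPORTS]->(b)
        """
    )

driver.close()
```

With a shape like this, “what does this payment type touch, and what breaks if this host goes down?” becomes a graph traversal rather than a chain of joins.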
Moderator: What I’m hearing is that the need to handle multiple relationships in these complex payment flows, and to make sure real-time payments occur without issues and can scale, answers the “why graph” question. How did you find that Neo4j might be a potential answer?
Rijutha Sivaprakas: We looked at what was already available within the organization; there was an existing graph database being widely used. At the same time, we looked at what was available in the market. After a paper-based assessment and research, we concluded that Neo4j met the performance and interoperability requirements we were looking for and was the better solution. That’s why we went with Neo4j.
Moderator: How did you first engage with Neo4j?
Rijutha Sivaprakas: We first had to prove that this would actually meet our requirements. I used Neo4j Community Edition in a non-production environment, loaded with the same volume of data we use in production. Then we did functional testing to make sure it met our performance requirements and ticked all of our must-haves. After that, it was taken to the product owner within the delivery and monitoring optimization team. They endorsed it, and we could take it to production. It was pretty clear that this would meet our needs. The next step was to reach out to Neo4j and talk about how to productionize the solution.
Moderator: So, going from graphs to Neo4j, and from trial to proof of concept, can you tell us a little more about who the stakeholders were and what success criteria you had to meet?
Rijutha Sivaprakas: The stakeholders were the product owners within the optimization team, the business owner, and the command center leads, who were quite heavily dependent on the existing solution and wanted it to be much better and more user-friendly. The main success criterion was addressing the fact that the old solution involved a lot of manual effort and intervention, even for changing the schema or updating the data sets. With the new solution, the important thing is the ability to easily automate the end-to-end data ingestion pipeline and to give end users a seamless UI transition. The fact that we could integrate Neo4j with the existing UI was an important success criterion.
Moderator: Payments involve massive systems. Could you shed some light on how you approached interoperability within the existing ecosystem and what you found?
Rijutha Sivaprakas: It was very important that we could integrate Neo4j with a myriad of systems and data sources, so we could pull data from wherever it was coming from. We found the Neo4j libraries, especially the Python driver, very helpful for these integrations. Using the driver, we could read data from Neo4j and plug it into a UI component, which was essential. With the team heavily using automation and monitoring, it was also important that we could automate the deployment of Neo4j with those tools and monitor it.
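The talk doesn’t include code, but here is a minimal sketch of that kind of integration using the official Neo4j Python driver: read from the graph and shape the result for a UI component. The query, labels, function name, and connection details are illustrative assumptions:

```python
# Read downstream payment systems from Neo4j and return plain dicts,
# the kind of shape a UI component can consume directly.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

def fetch_downstream(system_name: str) -> list[dict]:
    # Every system reachable from the given one via FLOWS_TO, at any depth.
    with driver.session() as session:
        result = session.run(
            "MATCH (s:System {name: $name})-[:FLOWS_TO*]->(d:System) "
            "RETURN DISTINCT d.name AS name",
            name=system_name,
        )
        return [record.data() for record in result]

print(fetch_downstream("PaymentGateway"))
driver.close()
```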
Moderator: That’s all good to hear. Could you give some examples of the business benefits of this new system and the gains in team productivity?
Rijutha Sivaprakas: The main thing was definitely the automation of the data ingestion pipeline. With the old solution, I would literally sit down and change the schema and update data points in the data sets manually. The fact that we were able to automate the whole ingestion pipeline with little-to-no human intervention was a huge gain for us. On the UI side, plugging the existing representations and visualizations into Neo4j meant a huge reduction in the number of lines of query code running in the backend against Neo4j.
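As a hedged illustration of that ingestion automation (not the bank’s actual pipeline), here is a small Python sketch that upserts system-to-system flows from a CSV file; the file layout, labels, and connection details are assumptions:

```python
# Hypothetical automated ingestion step: upsert rows from a CSV of
# system-to-system payment flows. MERGE makes the load idempotent, so the
# pipeline can rerun without creating duplicates.
import csv
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

def ingest(path: str) -> None:
    with open(path, newline="") as f, driver.session() as session:
        for row in csv.DictReader(f):  # assumed columns: source,target,payment_type
            session.run(
                "MERGE (s:System {name: $src}) "
                "MERGE (t:System {name: $dst}) "
                "MERGE (s)-[:FLOWS_TO {paymentType: $ptype}]->(t)",
                src=row["source"], dst=row["target"], ptype=row["payment_type"],
            )

ingest("payment_flows.csv")
driver.close()
```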
To give an example, one panel within a single report had a hierarchical structure and about 45 lines of query in the old system, which was reduced to a single line. This means it’s easier for newcomers to come in and understand what that query is doing, or even to make changes. It’s much simpler for us to enhance and manage.
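The actual queries weren’t shown in the talk, but a variable-length Cypher pattern illustrates how a hierarchy traversal that takes dozens of lines in a relational query can collapse into one line. Everything here (names, labels, connection details) is illustrative:

```python
# Illustrative only: one line of Cypher walks a hierarchy of any depth,
# which is the kind of traversal that needs long recursive queries in a
# relational store.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    result = session.run(
        "MATCH (root:System {name: $name})-[:FLOWS_TO*]->(d) RETURN d.name AS name",
        name="PaymentGateway",
    )
    print([r["name"] for r in result])

driver.close()
```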
Moderator: Changes and DevOps are so important to making sure the platform is as smooth as can be. In terms of future use cases, are there any other ideas or suggestions that the business users have in mind? What use cases are you seeing?
Rijutha Sivaprakas: Like banks, every other large enterprise is moving its systems from on-prem to the cloud, and that means changes to identity and access management (IAM) and all the access control rules. To comply with the organization’s standards when migrating from on-prem to the cloud, Neo4j can come into the picture to map out who has access to what, both expected and actual, and use that comparison to determine whether compliance checks for identity and access management are met. That applies irrespective of whether it’s banking or telecom or anything else; everyone can use it. And especially in the space I’m working in now, where we are moving off on-prem to the cloud, I can see this as a very good use case.
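As a sketch of that IAM idea, one could model expected and actual grants as separate relationship types and flag the mismatches; all labels and names here are hypothetical assumptions, not a product feature:

```python
# Hypothetical IAM compliance check: find actual access grants that policy
# never expected. HAS_ACCESS = actual, SHOULD_ACCESS = expected (assumed model).
from neo4j import GraphDatabase

driver = GraphDatabase.driver("neo4j://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    unexpected = session.run(
        """
        MATCH (u:User)-[:HAS_ACCESS]->(r:Resource)
        WHERE NOT (u)-[:SHOULD_ACCESS]->(r)
        RETURN u.name AS user, r.name AS resource
        """
    )
    for record in unexpected:
        print(f"Compliance gap: {record['user']} -> {record['resource']}")

driver.close()
```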
Moderator: Are there any other parts of the bank where you could see use cases?
Rijutha Sivaprakas: I worked with a financial crime team before, and there was a use case to build a decision-making system to understand whether a certain activity is suspicious. That involves blending multiple data sets together to reveal whether the activity is suspicious or not. Looking back to that time, Neo4j would have been a very good fit for that use case.