Data Virtualization abstraction levels and data platform envisioned
Author: Hasso Schaap | Category: Oracle
This year’s Oracle conferences, OpenWorld 2019 and CodeOne, offered some valuable insights for the industry. These days we all talk about the importance of data, deriving value from it and setting up data-driven initiatives. But it is difficult to move to the cloud while holding on to aging architecture principles that are tied to, and embedded in, existing infrastructure and business flows.
Two of the most important innovations in Oracle and Microsoft’s combined effort to ease the transition to their cloud datacenter solutions are interconnection and security integration. These are also two of the most important pillars of the cloud.
When designing data-driven solutions, combining a popular app developer cloud like Azure with a strong data cloud like Oracle shows that the cloud really does bring new opportunities.
Data Virtualization in a Data Architecture, session DEV2293
This year, I discussed the topic of Data Virtualization and the many forms it may take in my Developer session (http://bit.ly/datavirt). During the session, I highlighted a few vendors, some of which were also exhibiting at The Exchange. To summarize my session: don’t just blindly buy a popular standalone, general-purpose Data Virtualization tool. You can often already do more with existing architectures or software licenses, such as the Oracle Database. For reasons of performance and security, I will always prefer to implement Data Virtualization in the most powerful layer first, the Data Platform, rather than in middleware.
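The performance side of that argument can be illustrated with a small sketch (purely hypothetical, not an Oracle API): when the virtualization layer lives in the data platform, filters are applied at the source before any rows travel over the network (predicate pushdown); middleware that copies everything first and filters afterwards ships the full dataset every time.

```python
# Hypothetical sketch contrasting two places to apply a Data Virtualization
# filter: inside the data platform (predicate pushdown) versus in middleware.

from typing import Callable, Dict, List


class ToySource:
    """A toy data source that counts how many rows it ships over the wire."""

    def __init__(self, rows: List[Dict]):
        self.rows = rows
        self.rows_shipped = 0

    def scan(self, predicate: Callable[[Dict], bool] = lambda r: True) -> List[Dict]:
        result = [r for r in self.rows if predicate(r)]
        self.rows_shipped += len(result)
        return result


def middleware_filter(source: ToySource, predicate) -> List[Dict]:
    # Middleware style: pull every row across the wire, then filter locally.
    return [r for r in source.scan() if predicate(r)]


def platform_filter(source: ToySource, predicate) -> List[Dict]:
    # Platform style: the source applies the predicate before shipping anything.
    return source.scan(predicate)


rows = [{"id": i, "region": "EU" if i % 10 == 0 else "US"} for i in range(1000)]
wants_eu = lambda r: r["region"] == "EU"

a = ToySource(rows)
middleware_filter(a, wants_eu)
print("middleware shipped:", a.rows_shipped)  # all 1000 rows cross the wire

b = ToySource(rows)
platform_filter(b, wants_eu)
print("platform shipped:", b.rows_shipped)    # only the 100 matching rows
```

Both approaches return the same 100 rows; the difference is how much data moves, which is also why keeping the filter in the platform layer narrows the security surface.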
Data Virtualization is also becoming an important part of the Data Platform envisioned by Oracle. To build an Autonomous Data Platform, we will need tightly integrated clouds, including third parties, but most of all a Cloud Native Framework built with the CNCF community in mind.
A next-generation Platform will offer a pre-integrated environment, not a bundle of services, where you can decide to start using connectivity with all kinds of first- and third-party solutions without having to pay for a set of resources upfront. Pay-for-what-you-use can be based on transactions, payload, queries, megabytes, CPU time, memory usage or whatever combination seems appropriate for each (micro)service.
Connectivity allows not only for Data Virtualization, but also for integration, replication, migration, streaming, analytics and all other functions of a Data Platform. In the end, this connectivity constitutes the foundation of a Data Catalog: your starting point for all data-related activities. It also allows for integrated security, meaning less copying, less repetition and ultimately less service configuration and maintenance.
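A minimal sketch of that idea (hypothetical names, not an Oracle product API): once every connection is registered in one place, a catalog can answer "which sources expose this dataset" without copying any data, because it only records where each dataset lives.

```python
# Hypothetical sketch: a Data Catalog built on top of registered connections.
# Nothing is copied; the catalog only records where each dataset lives, so it
# becomes the starting point for all data-related activities.

from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Connection:
    name: str            # e.g. "oracle-adw-prod" (illustrative name)
    kind: str            # e.g. "database", "object-storage", "stream"
    datasets: List[str]  # datasets reachable through this connection


@dataclass
class DataCatalog:
    connections: Dict[str, Connection] = field(default_factory=dict)

    def register(self, conn: Connection) -> None:
        self.connections[conn.name] = conn

    def find(self, dataset: str) -> List[str]:
        """Return the names of all connections that expose a dataset."""
        return [c.name for c in self.connections.values()
                if dataset in c.datasets]


catalog = DataCatalog()
catalog.register(Connection("oracle-adw-prod", "database", ["sales", "customers"]))
catalog.register(Connection("azure-blob-raw", "object-storage", ["clickstream", "sales"]))

print(catalog.find("sales"))  # ['oracle-adw-prod', 'azure-blob-raw']
```

Because every function (virtualization, replication, streaming) goes through the same registered connections, security policies can be attached once, at the connection level, instead of being repeated per copy.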
Beyond CodeOne 2019 for Data & Analytics
For me, this year’s conference was about opening the lid on Open Source, but I also believe only a handful of Open Source software projects can survive. We will have to watch the market closely to see what comes next, after we have all used Spark, Kafka and a few other Data & Analytics related Open Source projects. I saw a few interesting new projects.
Here at Qualogy, we have our own initiative: The Horizon Guild. Horizon Guild allows us to discover and analyze these tech trends and match them to the local market in the Netherlands. It’s a bright future for us, with Oracle and Microsoft both supporting open source, open standards and open connectivity.
At our next encounter, you may find me discussing the data virtualization abstraction layers and methods that are available, and why these are an important concept in your enterprise data architecture.