Make the Most of Your Cloud Data Warehouse

Agile Data Engine is a DataOps Platform used for building, deploying, and running cloud data warehouses. It operates and orchestrates data loads from your preferred cloud storage to your target database.

DataOps tools for your Cloud Data Warehouse

Agile Data Engine's SaaS DataOps platform forms a solid foundation for your data warehouse so you can focus on building business value on top of it, rather than keeping it on life support.

Agile Data Engine is the only platform offering all of these straight out of the box:

Data Modeling & Transformations

Design data models and data load definitions such as transformations, business rules, and dependencies for the data platform.

Continuous Deployment

An out-of-the-box continuous delivery workflow with built-in CI/CD pipelines and automatic schema changes.

Workflow Orchestration

Load your data to your cloud database using metadata-based intelligent workflow generation, testing, orchestration, and monitoring.

API Connectivity

Easy-to-use metadata interfacing for operationalization and integration with existing systems and processes.

Monitoring & Testing

Workflow monitoring and timely data quality tests with integration possibility to customer backend systems.

Insights

Visibility and insights into development and operations for data teams, including DataOps management KPIs and a Data Product Catalogue.


Agile Data Engine Architecture

Agile Data Engine's metadata-driven approach has three core concepts:

  • Entities - a metadata object combining the data model and data loads into one. Information like keys and physical types is combined with permissions and load schedules.
  • Packages - collections of entities. A Package is also the unit of commit and deployment, flowing through the CI/CD pipeline.
  • Workflows - generated automatically based on logical data lineage and schedules. Entity loads are logical mappings between entities (and transformations), and the scheduling defines how often data is loaded.
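To make these relationships concrete, here is a minimal sketch in Python. The class names and fields are hypothetical illustrations of the concepts above, not Agile Data Engine's actual metadata schema:

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A metadata object combining the data model and its loads."""
    name: str
    keys: list[str]                       # business/primary keys
    columns: dict[str, str]               # column name -> physical type
    depends_on: list[str] = field(default_factory=list)  # upstream entities (lineage)
    schedule: str = "daily"               # how often data is loaded

@dataclass
class Package:
    """A collection of entities; the unit of commit and deployment."""
    name: str
    entities: list[Entity] = field(default_factory=list)

# Example: a staging entity and a fact entity that depends on it,
# committed and deployed together as one package.
stg_orders = Entity("STG_ORDERS", keys=["order_id"],
                    columns={"order_id": "INTEGER", "amount": "DECIMAL(12,2)"})
f_orders = Entity("F_ORDERS", keys=["order_id"],
                  columns={"order_id": "INTEGER", "amount": "DECIMAL(12,2)"},
                  depends_on=["STG_ORDERS"])
sales_pkg = Package("SALES", entities=[stg_orders, f_orders])
```

The `depends_on` lineage is what allows workflows to be derived automatically rather than hand-coded: the platform knows that `F_ORDERS` can only load after `STG_ORDERS`.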

Everything is designed in the same web-based user interface, in an environment shared by all developers and teams working on your data warehouse.

The data models, load definitions, and other information about the data warehouse content are stored in a central metadata repository with related documentation.

The actual physical SQL code for models and loads is generated automatically from the design metadata. Load workflows, too, are generated dynamically using the dependencies and schedule information provided by the developers.
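The idea of generating both SQL and workflow order from metadata can be sketched roughly as follows. This is an illustrative simplification with hypothetical metadata, not Agile Data Engine's generator; it derives a load order from declared dependencies with a topological sort and emits DDL from the column metadata:

```python
from graphlib import TopologicalSorter

# Hypothetical design metadata: entity name -> (columns, upstream dependencies)
entities = {
    "STG_ORDERS": ({"order_id": "INTEGER", "amount": "DECIMAL(12,2)"}, []),
    "F_ORDERS":   ({"order_id": "INTEGER", "amount": "DECIMAL(12,2)"}, ["STG_ORDERS"]),
    "F_REVENUE":  ({"day": "DATE", "total": "DECIMAL(14,2)"}, ["F_ORDERS"]),
}

def generate_ddl(name: str, columns: dict[str, str]) -> str:
    """Generate physical CREATE TABLE SQL from the design metadata."""
    cols = ", ".join(f"{col} {typ}" for col, typ in columns.items())
    return f"CREATE TABLE {name} ({cols});"

def workflow_order(entities: dict) -> list[str]:
    """Derive a load order from logical data lineage (dependencies)."""
    graph = {name: deps for name, (_, deps) in entities.items()}
    return list(TopologicalSorter(graph).static_order())

# Entities are created and loaded upstream-first, never out of order.
for name in workflow_order(entities):
    print(generate_ddl(name, entities[name][0]))
```

Because the order is computed from lineage, adding a new entity with its dependencies is enough for it to slot into the workflow; no orchestration code is edited by hand.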

More complex custom SQL transformations are stored as Load Steps within the metadata, so they benefit from the overall automation and the shared design experience.


Data Products Developed as Packages


Let's talk

Whether you're just getting started with a data transformation, are working to move from on-prem to cloud, or are curious to hear how we can help save you millions and grow your data warehouse's lifetime value, we're happy to chat!