Support for Nested Columns in BigQuery
To help you exploit BigQuery’s excellent support for nested JSON, we have added support on Hevo to load data into BigQuery's nested columns. This allows you to structure the data in your BigQuery Data Warehouse to fit your use case.
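To give a feel for how nested JSON maps onto BigQuery's nested (RECORD) columns, here is a minimal Python sketch that derives a BigQuery-style schema from one sample record. The function and field names are our own, for illustration only; Hevo's actual mapping logic may differ.

```python
import json

def infer_bq_schema(record):
    """Infer a BigQuery-style schema (an illustrative sketch, not the
    official API) from one sample JSON record. Nested objects become
    RECORD columns; lists of objects become REPEATED RECORD fields."""
    schema = []
    for name, value in record.items():
        if isinstance(value, dict):
            schema.append({"name": name, "type": "RECORD", "mode": "NULLABLE",
                           "fields": infer_bq_schema(value)})
        elif isinstance(value, list) and value and isinstance(value[0], dict):
            schema.append({"name": name, "type": "RECORD", "mode": "REPEATED",
                           "fields": infer_bq_schema(value[0])})
        elif isinstance(value, bool):  # check bool before int: bool is an int subclass
            schema.append({"name": name, "type": "BOOLEAN", "mode": "NULLABLE"})
        elif isinstance(value, int):
            schema.append({"name": name, "type": "INTEGER", "mode": "NULLABLE"})
        elif isinstance(value, float):
            schema.append({"name": name, "type": "FLOAT", "mode": "NULLABLE"})
        else:
            schema.append({"name": name, "type": "STRING", "mode": "NULLABLE"})
    return schema

# A hypothetical nested event, as it might arrive from a source
event = json.loads("""{
  "order_id": 1001,
  "customer": {"name": "Ada", "address": {"city": "Pune", "zip": "411001"}},
  "items": [{"sku": "A1", "qty": 2}]
}""")
schema = infer_bq_schema(event)
```

Here `customer` becomes a nested RECORD column (with its own `address` RECORD inside), and `items` becomes a REPEATED RECORD, so the data stays queryable without flattening.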
Improvements and Fixes
Ability to Edit Data Type while Editing Schema Mapper Mapping
To provide more flexibility while editing the mapped schema, we have now added the capability to choose the data type each time a new column is added.
Kafka as a Source
We are thrilled to tell you that in addition to the existing data sources, you will now be able to load data from Kafka into your destination data warehouse in real-time.
With this, you will be able to analyse all your streaming data at lightning-fast speeds and drive key decisions in real-time.
Ability to Cancel a Model in the Running State
To help you deal with queries that are stuck or have been running for a long duration, we have added the ability to kill a query from within Hevo.
With this newfound power, you can easily manage stranded/troublesome data models from within Hevo.
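Cancelling a running query is an idea most database drivers expose in some form. The sketch below uses Python's built-in sqlite3 module, whose `Connection.interrupt()` stands in for the cancel action; the table and the deliberately slow query are made up for illustration and are not Hevo's implementation.

```python
import sqlite3
import threading
import time

# check_same_thread=False lets the worker thread run the query while the
# main thread cancels it.
conn = sqlite3.connect(":memory:", check_same_thread=False)
conn.execute("CREATE TABLE nums (n INTEGER)")
conn.executemany("INSERT INTO nums VALUES (?)", [(i,) for i in range(1000)])

cancelled = {}

def run_stuck_query():
    try:
        # A triple cartesian self-join keeps this query running long
        # enough to be cancelled.
        conn.execute("SELECT count(*) FROM nums a, nums b, nums c").fetchone()
        cancelled["flag"] = False
    except sqlite3.OperationalError:  # raised in this thread when interrupt() fires
        cancelled["flag"] = True

worker = threading.Thread(target=run_stuck_query)
worker.start()
time.sleep(0.2)    # let the query get underway
conn.interrupt()   # the "cancel" button
worker.join()
```

The key design point is that the cancel signal is sent from a different thread than the one running the query, which is exactly the shape of a "kill this stuck model" action.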
Improvements and Fixes
Support to Load Historical Data from Tables with Composite Primary Keys
We have added support to load historical data from tables that have composite primary keys, for relational database sources such as MySQL, PostgreSQL, and MSSQL.
With this latest update, you can now load all of the historical data from all your tables seamlessly.
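Under the hood, loading history from a table with a composite primary key typically means paging by the full key tuple (keyset pagination) rather than a single column. Here is a minimal sketch with Python's sqlite3; the table, column names, and page size are made up for illustration and are not Hevo's actual implementation.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A hypothetical table whose primary key spans two columns
conn.execute("CREATE TABLE orders (region TEXT, order_no INTEGER, amount REAL, "
             "PRIMARY KEY (region, order_no))")
rows = [("east", 1, 10.0), ("east", 2, 20.0), ("west", 1, 5.0), ("west", 2, 7.5)]
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)

def fetch_page(last_key, page_size=2):
    """Fetch the next page of rows strictly after last_key = (region, order_no),
    using SQLite's row-value comparison to order by the whole composite key."""
    return conn.execute(
        "SELECT region, order_no, amount FROM orders "
        "WHERE (region, order_no) > (?, ?) "
        "ORDER BY region, order_no LIMIT ?",
        (*last_key, page_size)).fetchall()

# Walk the entire table page by page, carrying the last-seen key forward
history, last = [], ("", 0)
while True:
    page = fetch_page(last)
    if not page:
        break
    history.extend(page)
    last = (page[-1][0], page[-1][1])
```

Because each page resumes strictly after the last composite key seen, no row is skipped or duplicated even though no single column is unique on its own.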
MSSQL as a Destination
We are excited to tell you that in addition to the existing destinations, you will now be able to load data into MSSQL, Microsoft’s own Relational Database Management System. Not just that, you will also be able to create data models and materialized views on your data in MSSQL.
Cut Down on Notifications and Alerts
To help you stay on top of the things that matter to you, we have toned down the number of notifications and alerts you receive from Hevo. Based on feedback from many of you, we have eliminated redundant notifications and cleaned up the alerts. We hope this change enhances your experience with Hevo.
Load Status for Redshift, Snowflake, BigQuery
To help you get complete visibility into the events loaded into your warehouse, we have added a “Load Status” tab on the Hevo Pipeline page. This will help you see exactly how many events have been successfully loaded into the destination tables.
Welcome OracleDB as a Source
Yes! We have added OracleDB to the pool of data sources supported by Hevo. You can choose to bring in OracleDB data via a custom SQL query, the Redo Log, or by simply loading entire tables.
You can now analyse OracleDB data in real-time within your data warehouse in just a few clicks!
Unique Column Querier for All JDBC Sources
To help you bring in data more flexibly, Hevo now supports incremental data loads via a unique, incrementing column of any data type. The earlier constraint of a unique numeric key for append-only mode is being discontinued, and these updated constraints will apply in its place.
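Conceptually, incremental load via a unique incrementing column works by remembering the highest value loaded so far and fetching only rows beyond it. Here is a sketch with Python's sqlite3, using a text timestamp column to show the key need not be numeric; the table, column, and checkpoint logic are illustrative, not Hevo's implementation.

```python
import sqlite3

src = sqlite3.connect(":memory:")
# A hypothetical source table with a unique, incrementing TEXT column
src.execute("CREATE TABLE events (created_at TEXT PRIMARY KEY, payload TEXT)")
src.executemany("INSERT INTO events VALUES (?, ?)",
                [("2019-01-01T00:00:00", "a"), ("2019-01-02T00:00:00", "b")])

checkpoint = ""  # highest value loaded so far

def incremental_load():
    """Fetch only rows whose unique column exceeds the saved checkpoint,
    then advance the checkpoint to the highest value seen."""
    global checkpoint
    rows = src.execute(
        "SELECT created_at, payload FROM events WHERE created_at > ? "
        "ORDER BY created_at", (checkpoint,)).fetchall()
    if rows:
        checkpoint = rows[-1][0]
    return rows

first = incremental_load()   # picks up both existing rows
src.execute("INSERT INTO events VALUES ('2019-01-03T00:00:00', 'c')")
second = incremental_load()  # picks up only the newly inserted row
```

The only requirement on the column is that values for new rows always compare greater than values already loaded, which is why any orderable data type works.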