Pipeline Prioritization

Hevo can now prioritize data ingestion into the destination as per your needs. This feature allows you to define which pipeline matters most to you, prioritize ingestion for that pipeline, and ensure that its data is available at your destination on time.

All you need to do is let Hevo know what to prioritize by clicking the “priority” key next to the pipeline number.


A Direct Way to Share Feedback

To help you voice the good, the bad, and the ugly, we have added a Feedback icon in the app. It is an open forum to speak your mind, request new features, highlight improvements, and more.

We will be watching for your comments there.


Revamped User Experience

With your experience in mind, we have redesigned the Hevo dashboard for improved usability. Key changes include a dark layout that brings out the different elements of the Hevo UI, a sticky global search bar, a rearranged navigation bar, and more.

Do share your thoughts on Hevo’s new look.

Table and Column Name Sanitization in Destination

Previously, to help end users query the data effectively, Hevo would automatically sanitize table and column names at the destination. Hevo no longer enforces name sanitization.

You can now choose to retain the same table and column names as they appear in the source. This gives you the flexibility to define naming conventions as per your internal business standards.
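
As an illustration, name sanitization typically lowercases identifiers and replaces unsupported characters with underscores. The sketch below shows that general behavior; it is not Hevo’s exact rule set, and the column names are made up.

    import re

    def sanitize_name(name):
        """Typical sanitization: trim, lowercase, and replace any character
        that is not a letter, digit, or underscore with an underscore."""
        return re.sub(r"[^0-9A-Za-z_]", "_", name.strip()).lower()

    # With sanitization, a source column named "Order Date" lands in the
    # destination as "order_date"; with sanitization turned off, the column
    # is created as "Order Date" instead.
    print(sanitize_name("Order Date"))    # order_date
    print(sanitize_name("Customer-ID"))   # customer_id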

Support for Nested Columns in BigQuery

To help you take advantage of BigQuery’s excellent support for nested JSON, Hevo can now load data into BigQuery’s nested columns. This allows you to load data into your BigQuery data warehouse in the structure that best fits your use case.
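
For context, the sketch below uses the google-cloud-bigquery client to define a table with nested (RECORD) columns, which is the kind of structure nested JSON can be mapped to. The project, dataset, table, and field names are examples only and are not part of Hevo’s configuration.

    from google.cloud import bigquery

    # Example destination table with a nested "customer" column and a
    # repeated nested "items" column; all names are illustrative.
    schema = [
        bigquery.SchemaField("order_id", "STRING"),
        bigquery.SchemaField(
            "customer",
            "RECORD",
            fields=[
                bigquery.SchemaField("id", "STRING"),
                bigquery.SchemaField("email", "STRING"),
            ],
        ),
        bigquery.SchemaField(
            "items",
            "RECORD",
            mode="REPEATED",
            fields=[
                bigquery.SchemaField("sku", "STRING"),
                bigquery.SchemaField("quantity", "INTEGER"),
            ],
        ),
    ]

    client = bigquery.Client()
    table = bigquery.Table("my-project.my_dataset.orders", schema=schema)
    client.create_table(table)

Once nested data is loaded, fields can be queried with dot notation, for example SELECT order_id, customer.email FROM my_dataset.orders.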

Ability to Edit Data Type while Editing Schema Mapper Mapping

To provide more flexibility while editing the mapped schema, we have now added the capability to choose the data type each time a new column is added.

Kafka as a Source

We are thrilled to tell you that, in addition to the existing data sources, you can now load data from Kafka into your destination data warehouse in real time.

With this, you will be able to analyze all your streaming data at lightning-fast speeds and drive key decisions in real time.

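To picture the kind of stream this enables, the sketch below publishes a JSON event to a Kafka topic with the kafka-python client; the broker address, topic name, and event fields are placeholders, not Hevo settings.

    import json
    from kafka import KafkaProducer  # kafka-python client

    # Placeholder broker and topic; a pipeline reading from the same topic
    # would pick up each event and load it to the destination as it arrives.
    producer = KafkaProducer(
        bootstrap_servers="localhost:9092",
        value_serializer=lambda event: json.dumps(event).encode("utf-8"),
    )

    producer.send("orders", {"order_id": "1001", "status": "SHIPPED", "amount": 49.99})
    producer.flush()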

Ability to Cancel a Model in the Running State

To help you deal with queries that are stuck or have been running for a long duration, we have added the ability to cancel a running query from within Hevo.

With this newfound power, you can easily manage stranded or troublesome data models from within Hevo.

Support to Load Historical Data from Tables with Composite Primary Keys

We have added support for loading historical data from tables that have composite primary keys, for relational database sources such as MySQL, PostgreSQL, and MSSQL.

With this latest update, you can now load all of the historical data from all your tables seamlessly.
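
For context, a composite primary key is simply a primary key that spans more than one column. The SQLAlchemy sketch below defines one; the table and column names are illustrative.

    from sqlalchemy import Column, Integer, String
    from sqlalchemy.orm import declarative_base

    Base = declarative_base()

    class OrderItem(Base):
        """Illustrative table whose primary key spans two columns."""
        __tablename__ = "order_items"

        order_id = Column(Integer, primary_key=True)     # first part of the key
        line_number = Column(Integer, primary_key=True)  # second part of the key
        sku = Column(String(64))
        quantity = Column(Integer)

    # Creating this table (Base.metadata.create_all) on MySQL, PostgreSQL,
    # or MSSQL produces a composite PRIMARY KEY (order_id, line_number)
    # constraint; historical loads from such tables are now supported.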

MSSQL as a Destination

We are excited to tell you that, in addition to the existing destinations, you can now load data into MSSQL, Microsoft’s own relational database management system. Not just that, you will also be able to create data models and materialized views on your data in MSSQL.