Just Enough Data Weekly Newsletter 13
Hey Everyone,
Apologies, I could not post the newsletter last week. But I promise not to miss it again, and to keep you posted with the week's Data Engineering news going forward.
Do share your thoughts in the comment section if you have any feedback :)
How to Create a Machine Learning Framework
With only 1 in 10 models making their way into production, deploying models and generating business value is already a challenge that many organisations face. However, as you scale your machine learning (ML) operation, this challenge compounds: infrastructure can become unwieldy, and in turn, ML becomes less effective.
Event by Seldon
Online
Tue, Nov 30, 2021, 4:30 PM - 5:30 PM (your local time)
Data Observability Learning Summit 2021
At the first Data Observability Learning Summit, startup founders, senior executives, and engineering thought leaders shared challenges related to DataOps (data management, lineage, data quality, timeliness), MLOps/AIOps (ML monitoring, trusted AI), and the need to better connect the two. We are pleased to make these recordings available to a wider audience, to spread knowledge and build community in the space of Data Observability.
Data Virtualization seems promising. But does it scale with your data and BI needs?
Data engineers and data architects strive to provide data consumers with the data and analytics user experience they need. Data virtualization can look very promising — meet all my BI goals and not have to move data? Yes please!
However, as your data, user, and application scale grows, old and new problems arise that put you back where you started — or worse. Data lakes combined with Dremio can address BI needs at any scale.
Good to great series: Reverse ETL success stories for data teams
It’s no secret: Data teams are the heart and soul of modern, data-led businesses. They’re responsible for powering the company’s day-to-day operations with vital, fresh insights modeled for each specific use case their business teams need.
Introducing Analytics at Meta
The Facebook company is now Meta, and we couldn’t be more excited about the vision Mark laid out in his keynote at Connect. While our company name may have changed, our company mission remains the same: to give people the power to build community and connect the world. We are the Analytics Team at Meta, and we’re excited to play a role in seeing that mission realized.
Airflow 2.2.2 (2021-11-15) is out
Add a Docker Taskflow decorator (#15330, #18739)
Add Airflow Standalone command (#15826)
Display alert messages on dashboard from local settings (#18284)
Advanced Params using json-schema (#17100)
Ability to test connections from UI or API (#15795, #18750)
Add Next Run to UI (#17732)
Will Apache Arrow Flight SQL replace ODBC and JDBC for Analytics/BI workloads?
Most popular BI tools rely on ODBC or JDBC to bring data in from where it resides. For example, Microsoft Power BI relies mostly on ODBC, Tableau relies on both, Looker relies on JDBC, and Qlik relies on both depending on the product. This has worked well enough, mostly because both standards have been around for a long time and have inertia behind them. In many ways, however, these standards are tolerated rather than embraced by developers. There’s a reason ODBC has been referred to as Other Developers Buggy Code.
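For context on the performance argument the article makes: Arrow Flight SQL moves results as Arrow columnar batches end to end, whereas ODBC/JDBC expose a row-at-a-time cursor, forcing columnar engines to transpose data on the way out. A toy sketch of that difference (plain Python with hypothetical function names, not the actual Flight SQL or ODBC APIs):

```python
# Toy illustration (not real driver code): a row-oriented API hands results
# over row by row, so a columnar engine pays a transposition cost on every
# query; a columnar protocol can pass column batches through untouched.

columnar_result = {"city": ["Oslo", "Lima", "Pune"], "temp_c": [3, 19, 28]}

def fetch_rows(result):
    """Row-at-a-time view, as an ODBC/JDBC-style cursor would expose it."""
    return list(zip(*result.values()))  # transpose columns into row tuples

def fetch_columns(result):
    """Columnar view: batches pass through without any reshaping."""
    return result

print(fetch_rows(columnar_result))     # three (city, temp_c) row tuples
print(fetch_columns(columnar_result))  # the original column batches
```

The sketch only shows the shape of the two APIs; the real win in Flight SQL is skipping serialization and transposition at the wire-protocol level, not in client code.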
The Rise of Data Observability
Architecting the future of data trust
Lior Gavish, CTO and Co-Founder of Monte Carlo
What is data observability and how can you apply it to your data stack? Lior Gavish, co-founder of Monte Carlo and creator of the data observability category, discusses his vision for the future of end-to-end data trust at scale.
Evolution of the SQL language at Databricks: ANSI standard by default and easier migrations from data warehouses
by Bilal Aslam, Serge Rielau, Shant Hovsepian and Reynold Xin, posted in Platform Blog, November 16, 2021
Today, we are excited to announce that Databricks SQL will use the ANSI standard SQL dialect by default. This follows the announcement earlier this month about Databricks SQL’s record-setting performance and marks a major milestone in our quest to support open standards. This blog post discusses how this update makes it easier to migrate your data warehousing workloads to the Databricks Lakehouse Platform.
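One practical effect of the ANSI dialect worth knowing before you migrate: an invalid cast that legacy Spark SQL would silently turn into NULL raises an error under ANSI mode. A plain-Python sketch of that semantic shift (hypothetical function names, not Databricks or Spark code):

```python
# Sketch of the cast-semantics change under the ANSI SQL dialect:
# legacy behaviour maps CAST('abc' AS INT) to NULL, while ANSI mode
# raises an error instead of hiding the bad value.

def legacy_cast_int(value: str):
    """Legacy behaviour: an invalid cast silently becomes NULL (None)."""
    try:
        return int(value)
    except ValueError:
        return None

def ansi_cast_int(value: str) -> int:
    """ANSI behaviour: an invalid cast raises instead of returning NULL."""
    return int(value)  # raises ValueError for input like 'abc'

print(legacy_cast_int("abc"))  # None, i.e. NULL
print(ansi_cast_int("42"))     # 42
```

The stricter behaviour matches what traditional data warehouses do, which is part of why the ANSI default eases migrations.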
At the end of the day you will see success. For some it takes days; for others it can take years.
Ajith Shetty
Bigdata Engineer — Bigdata, Analytics, Cloud and Infrastructure.
Medium Subscribe ✉️ || More blogs 📝 || LinkedIn 📊 || Profile Page 📚 || Git Repo 👓