Don’t look now, but Apache Spark is about to turn 10 years old. The project began quietly at UC Berkeley in 2009 before being released as open source in 2010. For the past five years, Spark has been on an absolute tear, becoming one of the most widely used technologies in big data and AI.

Whether Spark 3.0 focuses on deep learning remains to be seen. A number of other improvements are reportedly being considered, including better online serving of machine learning models, fully deprecating the RDD API, improvements to the Scala API, better support for data formats (potentially Apache Arrow), better support for different processor types such as GPUs and FPGAs, support for Neo4j’s graph query language Cypher, and making the MLlib APIs type-safe.
