Recently we worked with a customer that needed to run a large number of models each day to satisfy internal and government-regulated risk requirements. Several thousand model executions had to be supported per hour, and total execution time was critical for this client. In the past the customer used thousands of servers to meet the demand, running many derivations of the model with different economic factors to satisfy their requirements.
Technology-focused discussions about genomics usually highlight the huge growth in DNA sequencing since the beginning of the century, growth that has outpaced Moore’s law and resulted in the $1000 genome. However, future growth is projected to be even more dramatic. In the paper “Big Data: Astronomical or Genomical?”, the authors estimate that “between 100 million and as many as 2 billion human genomes could be sequenced by 2025”.
As companies strive to implement modern solutions based on deep learning frameworks, the need to deploy them on existing hardware infrastructure in a scalable, distributed manner comes to the fore. Recognizing this need, Cloudera’s and Intel’s Big Data Technologies engineering teams have jointly detailed running Intel’s BigDL Apache Spark deep learning library on the latest release of Cloudera’s Data Science Workbench. This collaborative effort allows customers to build new deep learning applications with the BigDL Spark library by leveraging their existing homogeneous compute capacity of Xeon servers running Cloudera Enterprise, without having to invest in expensive GPU farms or bring up parallel frameworks such as TensorFlow or Caffe.
Cloudera Data Science Workbench provides freedom for data scientists: it gives them the flexibility to work with their favorite libraries in isolated environments, with a container for each project.
In the JVM world of Java and Scala, using your favorite packages on a Spark cluster is easy: each application manages its preferred packages in a fat JAR, which brings a self-contained environment to the Spark cluster. Many data scientists, however, prefer Python to Scala for data science.
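The fat-JAR workflow can be sketched with the sbt-assembly plugin; the project name, version numbers, and example library below are illustrative assumptions, not prescriptions:

```scala
// project/plugins.sbt — pulls in sbt-assembly (version is an assumption; pick one matching your sbt)
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "1.2.0")

// build.sbt — a minimal fat-JAR setup for a Spark application
name := "spark-app"                 // hypothetical project name
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // Spark is already present on the cluster, so it is marked "provided"
  // and excluded from the assembled JAR
  "org.apache.spark" %% "spark-core" % "2.2.0" % "provided",
  // application-specific libraries travel inside the fat JAR
  "com.typesafe" % "config" % "1.3.3"
)
```

Running `sbt assembly` then produces a single JAR that `spark-submit` ships to the cluster, so each application carries its own dependency environment independent of what other jobs use.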
An unmodified TPC-DS-based performance benchmark shows Impala’s leadership compared to a traditional analytic database (Greenplum), especially for multi-user concurrent workloads. Additionally, the benchmark continues to demonstrate a significant performance gap between analytic databases and SQL-on-Hadoop engines like Hive LLAP, Spark SQL, and Presto.
The past year has been one of the biggest for Apache Impala (incubating). Not only has the team continued to work on ever-growing scale and stability, but a number of key capabilities have been rolled out that further solidify Impala as the open standard for high-performance BI and SQL analytics.