Revolutionizing Data Science: The Latest Trends in Automation, Experimentation, and Language Model Evaluation

Subscribe • Previous Issues

Data Exchange Podcast

1. Evaluating Language Models. As more general-purpose models are widely used, the need grows for tools that help developers pick models that fit their needs and understand those models’ limitations. Percy Liang is Associate Professor of Computer Science and Statistics, and Director of the new Center for Research on Foundation Models at Stanford University.

2. Data Science In Context.  Peter Norvig (Google and Stanford) and Alfred Spector (MIT) are part of the team of authors behind the highly acclaimed book, Data Science in Context: Foundations, Challenges, Opportunities. We discussed the state of data science, their analysis rubric, and trending topics in AI, including looming regulations, synthetic data, and foundation models.

Data Science Analysis Rubric (from “Data Science In Context”): seven major considerations for determining data science’s applicability to a proposed solution.

Experimentation and Optimization Tools for Data Science Teams

In the words of Norvig, Spector, et al.: “Data science is the study of extracting value from data – value in the form of insights and questions.”  In practice, industrial data science teams wear multiple hats and, depending on the company, are often responsible for reports (analytics and BI), models (including machine learning and AI), and experiments (designing and executing tests).

I recently poked around to see what researchers interested in data science have been focusing on, and I found pretty good alignment with what teams in industry are tackling. Data science is fertile ground for automation, and many early examples of automation target specific aspects of data science projects, from modeling (autoML) to coding assistants that already generate decent SQL and pandas code. That said, these tools are still early: they handle narrow, well-specified tasks, and meaningful gaps remain before they cover the full breadth of a data science project.

[An analysis of academic & conference papers in data science surfaced these key areas. Data via Zeta Alpha.]

My analysis of recent academic and conference papers revealed a shortage of research in areas also neglected by entrepreneurs, specifically tools for experimentation and optimization. Historically, experimentation platforms have been bespoke solutions, primarily found within technology companies. However, with the advent of modern data platforms, it has become increasingly feasible to build solutions that democratize and systematize experimentation. As a result, there are now a few startups attempting to fill this important gap in data science tooling.
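To make concrete what such platforms systematize, here is a minimal sketch of the analysis at the heart of most experiments: comparing a metric between a control and a treatment group. The data, group sizes, and effect size below are made up for illustration; real platforms layer randomized assignment, logging, guardrail metrics, and sequential testing on top of this core computation.

```python
# Minimal sketch of the core experiment analysis: a difference in means
# between control and treatment, with a Welch t-test for significance.
# All numbers here are synthetic and purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=10.0, scale=2.0, size=5_000)    # baseline metric values
treatment = rng.normal(loc=10.2, scale=2.0, size=5_000)  # variant with a small simulated lift

lift = treatment.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(treatment, control, equal_var=False)
print(f"estimated lift: {lift:.3f}  (p-value: {p_value:.4f})")
```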

Operations research (OR), a discipline focused on understanding systems and constructing and refining models to make informed decisions, is a crucial component of data science. OR boasts a wide range of applications, including the allocation of resources, scheduling, inventory management, logistics and supply chain optimization, network management, and others. In order to address optimization challenges, data science teams typically turn to proprietary solvers and simulation tools.
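As a rough illustration of the kind of problem those solvers handle, here is a tiny resource-allocation linear program. The numbers are invented, and SciPy’s open-source linprog is used as a stand-in for the proprietary solvers most teams actually deploy.

```python
# Toy resource-allocation LP: choose production quantities of two products
# to maximize profit subject to machine-hour and labor-hour capacities.
# linprog minimizes by convention, so profits are negated.
from scipy.optimize import linprog

profit = [-40, -30]            # profit per unit of product A and B (negated)
A_ub = [[2, 1],                # machine hours used per unit of A and B
        [1, 3]]                # labor hours used per unit of A and B
b_ub = [100, 90]               # available machine hours and labor hours

res = linprog(c=profit, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("production plan:", res.x, "max profit:", -res.fun)
```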

These existing tools work well, but what if you had access to an optimization tool that scales and can tackle more complex scenarios? One that fits with the tools data science and ML teams already use (Python), and takes advantage of modern techniques (RL) to improve the performance of general-purpose optimizers.

I’ve long wondered whether the open source framework Ray, with its distributed computing capabilities and its ability to integrate reinforcement learning models, can enable data scientists to tackle increasingly complex optimization problems. I’ve played around with the search algorithms in Tune and found several capable of solving pretty interesting optimization problems (e.g., portfolio optimization in finance). Granted, I’ve only explored toy examples, and an enterprise-grade optimization solution would require more efficient optimization algorithms along with user-friendly interfaces and APIs that cater to non-experts. But even with toy examples you get a sense that a flexible distributed computing framework like Ray might be a great substrate for next-generation optimization solutions. The bottom line is that optimization tools seem due for a refresh, regardless of the framework you use.
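For the curious, here is roughly what those toy experiments looked like. This is a minimal sketch of mean-variance portfolio optimization using Ray Tune’s function API with its default random search; the asset returns, covariance matrix, config keys (w0, w1, w2), and objective are my own illustrative choices, not anything Ray prescribes.

```python
# Sketch: treat portfolio weights as a Tune search space and maximize a
# simple mean-variance objective. Synthetic data; illustration only.
import numpy as np
from ray import tune

MU = np.array([0.05, 0.07, 0.03])            # expected returns for three assets
SIGMA = np.array([[0.10, 0.02, 0.01],        # toy covariance matrix
                  [0.02, 0.12, 0.03],
                  [0.01, 0.03, 0.08]])

def objective(config):
    # Normalize raw weights so they sum to 1 (a long-only portfolio).
    w = np.array([config["w0"], config["w1"], config["w2"]])
    w = w / w.sum()
    ret, risk = w @ MU, w @ SIGMA @ w
    # Returning a dict reports it as the trial's final result.
    return {"score": ret - 0.5 * risk}

tuner = tune.Tuner(
    objective,
    param_space={k: tune.uniform(0.01, 1.0) for k in ("w0", "w1", "w2")},
    tune_config=tune.TuneConfig(metric="score", mode="max", num_samples=200),
)
best = tuner.fit().get_best_result()
print(best.config, best.metrics["score"])
```

Swapping the default random search for one of Tune’s more sophisticated search algorithms is a matter of passing search_alg to TuneConfig, which is part of what makes it an appealing playground for this kind of experiment.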

Entrepreneurs may find more opportunities for growth and success by focusing on delivering solutions for experimentation and optimization. The market for such solutions will expand as demand for cutting-edge technology to optimize business processes and decision intelligence capabilities increases. Startups that are able to successfully deliver innovative solutions that meet these needs may see significant growth in the coming years.


Recent job listings in the U.S. & Europe that mention Generative AI or Language Models reveal that adoption of these technologies remains in its nascent stages. Tech firms continue to dominate the job market, with a focus on the development of software tools and media applications utilizing these technologies.

Spotlight

1. How Close is ChatGPT to Human Experts? Comparison Corpus, Evaluation, and Detection. ChatGPT has received attention for its ability to provide fluent and comprehensive answers to a wide range of human questions. In a recent study, researchers collected a dataset of responses from both human experts and ChatGPT and conducted evaluations to study the characteristics and differences between the two. They also explored methods for effectively detecting whether a certain text was generated by ChatGPT or humans.

2. Ray Achieves Record-Low Cost per TB with World’s Most Efficient Sorting System.  This benchmark highlights that Ray has the scalability and performance required to execute even the most demanding distributed data processing tasks. Sorting is a notoriously demanding benchmark precisely because it stress-tests every aspect of a system, exposing bottlenecks across both the hardware and software stacks, including the CPU, memory, disk, network, OS, filesystem, and runtime libraries.

3. Blind Face Restoration via Transformer-based prediction network.  Facial images captured in uncontrolled environments commonly suffer degradation, including compression artifacts, blurring, and noise. This impressive demo showcases a new model for high-quality face restoration from damaged or low-resolution images.


If you enjoyed this newsletter please support our work by encouraging your friends and colleagues to subscribe:
