The Big Questions Shaping AI Today

Feeling inspired to write your first TDS post? We’re always open to contributions from new authors.

The constant flow of model releases, new tools, and cutting-edge research can make it difficult to pause for a few minutes and reflect on AI’s big picture. What are the questions that practitioners are trying to answer—or, at least, need to be aware of? What does all the innovation actually mean for the people who work in data science and machine learning, and for the communities and societies that these evolving technologies stand to shape for years to come?

Our lineup of standout articles this week tackles these questions from multiple angles—from the business models supporting (and sometimes generating) the buzz behind AI to the core goals that models can and cannot achieve. Ready for some thought-provoking discussions? Let’s dive in.

  • The Economics of Generative AI
    “What should we be expecting, and what’s just hype? What’s the difference between the promise of this technology and the practical reality?” Stephanie Kirmer’s latest article takes a direct, uncompromising look at the business case for AI products—a timely exploration, given the increasing pessimism (in some circles, at least) about the industry’s near-future prospects.
  • The LLM Triangle Principles to Architect Reliable AI Apps
    Even if we set aside the economics of AI-powered products, we still need to grapple with the process of actually building them. Almog Baku’s recent articles aim to add structure and clarity to an ecosystem that can often feel chaotic; taking a cue from software developers, his latest contribution focuses on the core product-design principles practitioners should adhere to when building AI apps.

Photo by Teagan Ferraby on Unsplash

  • What Does the Transformer Architecture Tell Us?
    Conversations about AI tend to revolve around usefulness, efficiency, and scale. Stephanie Shen’s latest article zooms in on the inner workings of the transformer architecture to open up a very different line of inquiry: the insights we might gain about human cognition and the human brain by better understanding the complex mathematical operations within AI systems.
  • Why Machine Learning Is Not Made for Causal Estimation
    With the arrival of any groundbreaking technology, it’s crucial to understand not just what it can accomplish, but also what it cannot. Quentin Gallea, PhD highlights the importance of this distinction in his primer on predictive and causal inference, where he unpacks the reasons why models have become so good at the former while they still struggle with the latter.

Looking for other questions to explore this week—whether big, mid-sized, or extremely focused? We invite you to check out some of our other recent standouts.

  • Comprehensive and actionable, Sachin Khandewal’s debut TDS article presents a novel RAG approach that integrates complex reasoning for improved output quality.
  • Natural language processing meets The Office in Maria Mouschoutzi, PhD’s accessible tutorial, which conducts sentiment analysis on characters’ lines as a way to better understand the potential of this technique (as well as its limitations).
  • “Wouldn’t it be nice to have an approach that not only clustered the data but also provided innate profiles of each cluster?” Nakul Upadhya shares a beginner-friendly introduction to interpretable clustering.
  • In his latest math-focused deep dive, Reza Bagheri provides a detailed, expertly illustrated breakdown of decision trees and gradient boosting, how they work, and how we can implement the latter from scratch in Python.
  • If you’d like to enter data science but don’t have the credentials that typically lead to competitive roles, Mandy Liu’s new post offers all the inspiration—and actionable advice—you’ll need to set your career on the right path.
  • How do neural networks perceive categoricals and their hierarchies? Valerie Carey continues to explore high-cardinality categorical features and the intricacies of working with them.
  • Interested in solving complex optimization problems? Don’t miss Will Fuks’s engaging walkthrough of a recent project that leveraged linear programming to streamline a container-based supply-chain operation on a global scale.
  • For those of you who prefer to approach ML models from a product perspective, we strongly recommend Julia Winn’s excellent primer on evals and their potential impact on user experience.

Thank you for supporting the work of our authors! We love publishing articles from new authors, so if you’ve recently written an interesting project walkthrough, tutorial, or theoretical reflection on any of our core topics, don’t hesitate to share it with us.

Until the next Variable,

TDS Team

