Vision Transformers, Contrastive Learning, Causal Inference and Other Deep Dives You Can’t Miss

Feeling inspired to write your first TDS post? We are always open to contributions from new authors.

As many of us enter the final stretch of summer, it’s time to take advantage of the quieter weeks before the typically hectic September and explore new topics in data science and machine learning.

To help all the learners and skill-builders among our readers, this week we’re presenting a special edition of The Variable, dedicated entirely to our best recent deep dives (and other articles that require a little more time and focus than usual). They may be longer to read, but they do a fantastic job of covering their respective topics with nuance, care, and an eye for practical application. We hope you enjoy our selection.

  • A practical guide to contrastive learning
    Contrastive learning is useful for learning underlying data representations without explicit labels, and it has numerous practical applications. Mengliu Zhao guides us through the process of building a SimSiam model using the FashionMNIST dataset.
  • Paper walkthrough: Vision Transformer (ViT)
    We are always in the mood for a solid, in-depth paper analysis, and even more so when it deals with a groundbreaking concept like vision transformers. If you are new to the topic or want to expand your existing knowledge of ViT, don’t miss Muhammad Ardi’s debut article on TDS.
  • Accelerating the Vision Transformer with BatchNorm
    Let’s stick with the vision transformer for a moment: if you’re already familiar with it but could use some help making your workflows more efficient and streamlined, Anindya Dey, PhD provides a comprehensive guide to integrating batch normalization into an encoder-only transformer architecture, resulting in reduced training and inference time.
  • Improving Ecommerce with Generative AI – Part 1
    Some of the promised benefits of recently released AI tools have yet to materialize. Mina Ghashami presents a new series focusing on use cases where generative AI applications are already poised to make a real impact, starting with one of the most common (and mission-critical) tasks for ecommerce platforms: product recommendations.
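For readers who want a feel for the core idea before diving into Mengliu Zhao’s guide, here is a minimal NumPy sketch of the symmetrized negative-cosine-similarity loss that SimSiam optimizes. The function name and array shapes are our own illustration, not code from the article; in a trained model, the projections `z1` and `z2` would also sit behind a stop-gradient, which this pure-NumPy version can only note in a comment.

```python
import numpy as np

def simsiam_loss(p1, z1, p2, z2):
    """Symmetrized SimSiam loss: D(p1, z2)/2 + D(p2, z1)/2.

    p1, p2: predictor outputs for the two augmented views, shape (batch, dim).
    z1, z2: projector outputs for the two views, shape (batch, dim).
    """
    def D(p, z):
        # Negative cosine similarity, averaged over the batch.
        # In the real model, z is detached (stop-gradient) before this step.
        p = p / np.linalg.norm(p, axis=-1, keepdims=True)
        z = z / np.linalg.norm(z, axis=-1, keepdims=True)
        return -(p * z).sum(axis=-1).mean()

    return 0.5 * D(p1, z2) + 0.5 * D(p2, z1)
```

When the predictor outputs align perfectly with the other view’s projections, the loss bottoms out at -1; the stop-gradient on `z` is what keeps this objective from collapsing to a trivial constant representation.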

Photo by Nellie Adamyan on Unsplash

  • Causal Inference with Python: A Guide to Propensity Score Matching
    Lukasz Szubelak brings theory and practice together, inviting us to explore the ins and outs of causal inference through an in-depth patient-data case study. He focuses on propensity score matching as a powerful technique for estimating treatment effects in non-randomized settings.
  • ChatGPT vs. Claude vs. Gemini for Data Analysis (Part 1)
    ML practitioners face an increasingly difficult choice among LLM-powered products. Yu Dong’s new series aims to bring clarity to a sometimes chaotic ecosystem by comparing the performance of three key offerings (ChatGPT, Claude, and Gemini) on critical data analysis tasks, in this case, writing SQL queries.
  • Omitted variable bias
    Reading Sachin Date’s explanation of math and statistics is always a highlight for us — and his latest, on “one of the most common and easily missed biases in regression studies,” is no exception. We invite you to explore his deep dive into the omitted variable bias, which also outlines several approaches to analyzing and estimating its effects.
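Sachin Date’s article covers the theory in depth; as a quick, self-contained illustration (a simulated toy example of ours, not data or code from the article), here is how omitting a confounder skews an ordinary-least-squares coefficient:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

z = rng.normal(size=n)                       # confounder
x = 0.8 * z + rng.normal(size=n)             # regressor, correlated with z
y = 2.0 * x + 3.0 * z + rng.normal(size=n)   # true coefficient on x is 2.0

# Regressing y on both x and z recovers the true coefficient on x.
beta_full = np.linalg.lstsq(np.column_stack([x, z]), y, rcond=None)[0]

# Omitting z pushes the x coefficient toward 2 + 3 * cov(x, z) / var(x):
# the effect of the missing variable is absorbed by its correlated neighbor.
beta_omit = np.linalg.lstsq(x[:, None], y, rcond=None)[0]
```

With these simulated parameters, `cov(x, z) = 0.8` and `var(x) = 1.64`, so the omitted-variable estimate lands near 2 + 3 × 0.8 / 1.64 ≈ 3.46 rather than the true 2.0, which is exactly the kind of silent distortion the article teaches you to spot and estimate.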

Thank you for supporting the work of our authors! We love publishing articles from new authors, so if you have recently written an interesting project walkthrough, tutorial or theoretical reflection on one of our core topics, please do not hesitate to share it with us.

Until the next Variable,

TDS Team


Vision Transformers, Contrastive Learning, Causal Inference, and Other In-Depth Studies You Can’t Miss was originally published in Towards Data Science on Medium.