Web Picks (week of 18 May 2020)

Every two weeks, we find the most interesting data science links from around the web and collect them in Data Science Briefings, the DataMiningApps newsletter. Subscribe now for free if you want to be the first to get up to speed on interesting resources.

  • Exploring Bayesian Optimization
    Breaking Bayesian Optimization down into small, digestible chunks.
  • Does your AI discriminate?
    “Hiring algorithms create a selection process that offers no transparency and is not monitored. Applicants struck from an application process – or as Ajunwa refers to it, “algorithmically blackballed” – have few legal protections.”
  • Deep Reinforcement Learning Works – Now What?
    “Two years ago, Alex Irpan wrote a post about why “Deep Reinforcement Learning Doesn’t Work Yet”. Since then, we have made huge algorithmic advances. Despite these advances, I argue that we, as a community, need to re-think several aspects.”
  • A visual explanation for regularization of linear models
    “The goal of this article is to explain how regularization behaves visually, dispelling some myths and answering important questions along the way.”
  • Measuring Fairness
    How do you make sure a model works equally well for different groups of people? It turns out that in many situations, this is harder than you might think.
  • Towards understanding glasses with graph neural networks
    “Glasses can be modelled as particles interacting via a short-range repulsive potential which essentially prevents particles from getting too close to each other. This potential is relational (only pairs of particles interact) and local (only nearby particles interact with each other), which suggests that a model that respects this local and relational structure should be effective. In other words, given the system is underpinned by a graph-like structure, we reasoned it would be best modeled by a graph structured network.”
  • Prediction is hard
    “Prediction, as the Danish proverb says, is hard, because we don’t have any data from the future. We can divide predictive models into three broad classes”
  • A Commit History of BERT and its Forks
    “I recently came across an interesting thread on Twitter discussing a hypothetical scenario where research papers are published on GitHub and subsequent papers are diffs over the original paper. Information overload has been a real problem in ML with so many new papers coming out every month. This post is a fun experiment showcasing what the commit history could look like for the BERT paper and some of its subsequent variants.”
  • Openpilot, its model and driving in GTA
    Using the Openpilot model to drive a car in GTAV
  • Artbreeder
    Artbreeder aims to be a new type of creative tool that empowers users’ creativity by making it easier to collaborate and explore, based on GANs.
  • Hugging Face dives into machine translation with release of 1,000 models
    Hugging Face is taking its first step into machine translation this week with the release of more than 1,000 models.
  • Turing.jl
    A fast modeling and ML library for Julia.
  • Your Boss Is Watching You: Work-From-Home Boom Leads To More Surveillance
    “Employees were to install software called Hubstaff immediately on their personal computers so it could track their mouse movements and keyboard strokes, and record the webpages they visited.”
  • Can AI Become Conscious?
    Intelligence is about behavior. For example: what do you do in a new environment in order to survive? Consciousness is not about behavior; consciousness is about being.
  • Understanding uncertainty: Visualising probabilities
    Ian Short explores modern visualisation techniques and finds that the right picture really can be worth a thousand words.
  • This word does not exist
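As a small aside on the regularization pick above: the shrinking effect that the linked article visualizes can also be seen numerically. Below is a minimal sketch (ours, not from the article) using the closed-form ridge solution w = (XᵀX + λI)⁻¹Xᵀy on toy data; the data, the `ridge` helper, and the λ values are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear data: y = 3*x0 - 2*x1 + a little noise.
X = rng.normal(size=(100, 2))
y = X @ np.array([3.0, -2.0]) + 0.1 * rng.normal(size=100)

def ridge(X, y, lam):
    """Closed-form ridge regression: w = (X^T X + lam*I)^-1 X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# As the L2 penalty lam grows, the coefficient vector is pulled toward zero.
for lam in [0.0, 10.0, 1000.0]:
    w = ridge(X, y, lam)
    print(f"lam={lam:>6}: w={np.round(w, 3)}  ||w||={np.linalg.norm(w):.3f}")
```

With λ = 0 the fit recovers roughly (3, −2); raising λ steadily shrinks the norm of the coefficients, which is exactly the geometry the article draws.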