
Misuse of the new shiny package: A nerdy drink tracker for your next party

Currently a lot of people are talking about the new shiny package, so I got curious and built my own, more or less useful app: a drink tracker.


This app can be used to track how much each person has drunk, which makes it very useful at any party, especially when you plan to play some drinking games.


The usage is very simple:

  1. Start an R session
  2. Run the following script (uncomment the install line to install the packages) and change the names in the persons vector; a sketch of such a script is included after this list
  3. The app should open in your default browser
  4. Every person who had a shot / a sip of beer / whatever can be chosen from the drop-down list. If the same person has to drink again, simply push "Drink again!". You can switch between the timeline and the leaderboard by clicking on the tabs.
  5. Have fun!
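
The original script is not reproduced in this copy of the post, so below is a minimal sketch of what such a drink tracker could look like, assuming shiny plus ggplot2 for the timeline plot. The persons vector, the "Drink again!" button, and the timeline/leaderboard tabs follow the description above; everything else (the drinks reactive value, the log_drink helper, the input and output names) is illustrative and not the original code.

# install.packages(c("shiny", "ggplot2"))  # uncomment to install the packages
library(shiny)
library(ggplot2)

# Change the names in the persons vector to match your guests
persons <- c("Alice", "Bob", "Carol")

ui <- fluidPage(
  titlePanel("Drink tracker"),
  sidebarLayout(
    sidebarPanel(
      selectInput("person", "Who drank?", choices = persons),
      actionButton("drink", "Drink!"),
      actionButton("again", "Drink again!")
    ),
    mainPanel(
      tabsetPanel(
        tabPanel("Timeline", plotOutput("timeline")),
        tabPanel("Leaderboard", tableOutput("leaderboard"))
      )
    )
  )
)

server <- function(input, output, session) {
  # One row per drink: who drank and when
  drinks <- reactiveVal(data.frame(person = character(0),
                                   time   = as.POSIXct(character(0))))

  log_drink <- function(who) {
    drinks(rbind(drinks(), data.frame(person = who, time = Sys.time())))
  }

  # Log a drink for the person selected in the drop-down
  observeEvent(input$drink, log_drink(input$person))

  # "Drink again!" re-logs the most recently logged person
  observeEvent(input$again, {
    d <- drinks()
    if (nrow(d) > 0) log_drink(d$person[nrow(d)])
  })

  # Timeline tab: one point per drink over time
  output$timeline <- renderPlot({
    d <- drinks()
    if (nrow(d) == 0) return(NULL)
    ggplot(d, aes(x = time, y = person)) + geom_point(size = 3)
  })

  # Leaderboard tab: total drinks per person, highest first
  output$leaderboard <- renderTable({
    d <- drinks()
    if (nrow(d) == 0) return(NULL)
    tab <- table(d$person)
    counts <- data.frame(person = names(tab), drinks = as.integer(tab))
    counts[order(counts$drinks, decreasing = TRUE), ]
  })
}

# launch.browser = TRUE opens the app in your default browser
runApp(shinyApp(ui, server), launch.browser = TRUE)

Storing every drink as a timestamped row keeps both views simple: the timeline just plots the raw rows, and the leaderboard is nothing more than a count of rows per person.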

