Panel: Intelligent Interactive Systems

Andreas Bulling, Albrecht Schmidt, Niels Henze,
& Passant El.Agroudy
Tue, 13.03.2018

In a panel discussion, we faced the questions: Where in HCI can we cleverly apply machine learning? How do we choose an appropriate method? How do we design a UI that fits our model’s performance? And how do we cope with an “imperfect” model while maintaining the best possible user experience?

Important research questions at the intersection of AI and HCI include: 1) Interactive Machine Learning; 2) Visual Analytics; and 3) Making Machine Learning Decisions Understandable (e.g., AlgorithmWatch is an initiative for that).

Take Away

  • What could you do?
    • think about methods and approaches for using machine learning in HCI
      (not only in your own project)
    • actually use machine learning approaches in HCI
  • What to explain for the user?
    • the optimization function vs. the input/output behaviour vs. the whole black box
    • What is the necessary info for the user to make it understandable?
  • Designing for the imperfections of machine learning methods
    • design a *usable* system/UI, cope with errors
    • Again: make machine learning systems usable!
  • Make use of the “cool machine learning stuff” that exists, but ask: how good is it really?
  • There is a lack of expertise in HCI (closing remark Andreas) ==> look into these topics!
    • understanding how it works
    • concepts are important to understand (not only for us as researchers, but also for reviewing and judging other papers, for the community, etc.)
  • Looking at a variety of methods (closing remark Niels)
    • go beyond a simple classifier
    • Is “only” improving performance exciting?
    • Try new methods!
    • finding the “why” and a solution

Panel protocol

  • How can HCI benefit from AI
  • e.g., nearest-neighbour queries in databases; replacing “stupid” things with AI is a general goal
  • Where can we in HCI cleverly use machine learning?
  • participants reported projects where they applied machine learning; quite a lot already have experience with machine learning / used it in their projects!
  • Reasons for the other participants not using machine learning so far:
    • skills need to be invested / “learning” necessary
    • missing experience
    • interpretation is difficult and may be unclear
  • why not use “simple statistics”? → “intelligence”, robustness
  • on the other hand: lots of papers get accepted due to “fancy machine learning stuff” (though this is “only” separating classes …)
  • Andreas’ project: gaze estimation
    • learning-based / deep learning methods are mainly more robust! no handcrafting, no explicit geometry stuff

Sample for questions and answers

Q: sample size needed to start machine learning so that it makes sense? (Albrecht)
  • depends … millions of images / samples → synthesizing samples (getting annotated data … for training) (Andreas)
  • on mobile devices / touch: quickly lots of data (Niels)
  • in autonomous … quickly lots of data (Albrecht)
  • data needs to be annotated (Andreas)
Q: Which Method to use for what?! (Andreas)
  • deep learning: needs training and lots of data …
  • SVMs are useful in most cases, for classification as well as regression (predicting continuous variables)
  • Random Forests: also do not require lots of training data
  • CNNs
  • for time-series data: special models are needed, e.g., Markov models
How to choose a model?
  • classification vs regression
  • type of input: stationary vs time series
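The model choice sketched above (e.g., SVM vs. Random Forest for stationary classification data) can be illustrated with scikit-learn; the dataset here is synthetic and the hyperparameters are library defaults, purely for illustration:

```python
# Minimal sketch: comparing two of the models named on the panel on a toy
# classification task. Not a recommendation for any specific HCI problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Toy stationary (non-time-series) data: 300 samples, 2 classes.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

for name, model in [("SVM", SVC()),
                    ("Random Forest", RandomForestClassifier(random_state=0))]:
    model.fit(X_train, y_train)
    acc = model.score(X_test, y_test)  # accuracy on held-out data
    print(f"{name}: {acc:.2f}")
```

For a real project, the same comparison would of course use the project’s own data and a proper validation scheme.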
Common mistakes & pitfalls:
  • !! Attention, what is often reported wrongly: how often / how well the algorithm works (Albrecht)
  • a common mistake is a biased data distribution (e.g., classifying 5 classes while having data mainly from 2 of them / class imbalance) (Andreas)
  • “wonderful performance” can still be bad in practice (e.g., recognizing touch in 95% of the cases can be highly annoying for the remaining 5%) vs. merely “being better than chance” (Niels)
    • e.g., dictating papers?! why do we write papers and not dictate them …
    • due to bad detection of vocabulary …?
    • cost of failure for wrong guesses is high in this example!!!
    • and: false classification can be annoying ….
      → what does machine learning do to the user experience if it is not 100% correct?!
  • downrating? which parameters are optimized?
    • do not only optimize on training data
    • do not only report results on training data (what matters is how your model fits new data)
  • hyperparameter optimization on training data is okay, but the final evaluation must use held-out test samples (final validation is necessary!)
  • attention: many things can go wrong in machine learning (e.g. parameter choice)
  • lack of expertise in HCI communities (authors as well as reviewers …)
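The “do not only report on training data” pitfall above can be made concrete with a small sketch; the dataset is synthetic and the model deliberately overfit-prone, so the numbers only illustrate the train/test gap:

```python
# Sketch of the pitfall: a model can look perfect on its own training data
# yet generalize much worse to new data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Toy data with only 2 informative features out of 20 -> easy to overfit.
X, y = make_classification(n_samples=200, n_features=20, n_informative=2,
                           random_state=1)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=1)

# An unconstrained decision tree memorizes the training set.
tree = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
print("train accuracy:", tree.score(X_train, y_train))  # looks perfect
print("test accuracy: ", tree.score(X_test, y_test))    # noticeably lower
```

Only the second number says anything about how the model will behave in an actual application.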

==> in HCI: put your model into an application!

Q: How much can we trust the “black box” (machine learning) decisions?
  • x% certain / confidence value
  • evaluate methods properly in “real” conditions, test how systems perform in the real scenario
  • interpreting machine learning models and allowing people to understand them is important (e.g., providing labels & explanations); make it interpretable for users
  • confusion matrices: also report on errors; is that relevant? Yes! show how well your model works on each class / do not use data with a lot of “confusion”
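A minimal sketch of such per-class error reporting with scikit-learn’s `confusion_matrix`; the gesture labels and predictions are made up for illustration:

```python
# Rows are true classes, columns predicted classes; off-diagonal entries
# show which classes the model confuses with which.
from sklearn.metrics import confusion_matrix

y_true = ["tap", "tap", "swipe", "swipe", "swipe", "hold"]
y_pred = ["tap", "swipe", "swipe", "swipe", "tap", "hold"]
labels = ["tap", "swipe", "hold"]

cm = confusion_matrix(y_true, y_pred, labels=labels)
print(cm)
# Here, "tap" and "swipe" get confused with each other,
# while "hold" is always recognized correctly.
```

Reporting this matrix (rather than a single accuracy number) shows exactly where the remaining errors live.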
Applications
  • Kinect as one of the best examples of what was done with (even imperfect) recognition
  • e.g., design the UI to fit the performance of the machine learning algorithm; understand the model’s imperfections when designing a UI!
  • trade-off: do not offer something in the UI what you cannot support by your model!
  • Adaptation:
    • do not only adapt the system to the user, but also the system to the underlying model
    • user can influence the performance of the algorithm (improve it)
      → consistent imperfection may also be fine, as long as you understand the error
    • e.g.: people pronounce words differently to make them recognized by a voice recognition system (→ the user adapts to the system and can improve its performance)
  • give intelligent systems to adaptive, intelligent people! ==> we learn to compensate for potential errors
  • but: users adapt so fast, deliver more data ==> can we train on this data?!
    (is it “reinforcement learning”, if the user has already adapted?!)
Important Points in HCI:
  • interactive machine learning
    • probably most interesting for HCI
    • human can provide additional supervision and speed up learning
    • but the machine can also help the user and improve (→ optimization goals)
  • visual analytics
    • was originally done by looking at images
    • is now AI / machine learning
  • making machine learning decisions understandable / give explanations
    • is that even possible?
    • yes, give the user reasons why the system came to a certain decision!
    • e.g., AlgorithmWatch
    • but: what to do with big decision trees?
    • in theory: try different inputs, see outputs (without knowing the weights)
    • but: explanations may be unfair / some factors perhaps should not be exposed (e.g., the model may have learned from race & educational background …)
  • explainable machine learning: do not go “into the box”, but tell which input parameters were optimized for → as a legal requirement
  • open up your decisions, data and optimization (what you optimized your model for) to your customer
    ==> “take the magic out” ==> legislation
  • how much to show / hide? (e.g., just the cost function vs. the whole handcrafted algorithm and model)
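The “try different inputs, see outputs (without knowing the weights)” idea above can be sketched as black-box probing; `black_box` and its input features are hypothetical stand-ins for a real trained model that we can only call, not inspect:

```python
# Sketch: probe a black-box model by perturbing one input feature at a time
# and watching whether the prediction flips. This reveals which inputs the
# decision is sensitive to, without opening the box.

def black_box(features):
    # Hypothetical model; in practice this would be the deployed system.
    score = 0.8 * features["speed"] + 0.2 * features["pressure"]
    return 1 if score > 0.5 else 0

baseline = {"speed": 0.4, "pressure": 0.4}
base_pred = black_box(baseline)

for name in baseline:
    probe = dict(baseline, **{name: baseline[name] + 0.3})
    if black_box(probe) != base_pred:
        print(f"prediction is sensitive to '{name}'")
```

Here only perturbing `speed` flips the decision, which hints at what the model weighs most, even though its internals stay hidden.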
Ethics
  • should we demand a machine learning system to decide according to our human values?! how to decide then?!
  • neural networks as bots got a lot of attention → learned “bad behaviour” ==> can we (rather: can systems) learn ethics from the data that is out there?!
  • how about teaching systems to make decisions like we would do it?
    (i.e. not based on race)
  • the best algorithm for required prediction vs algorithm that excludes certain features (discrimination, gender, race,… )
  • perfect rational, data-based decisions?
    → we humans cannot deal with that, we are subjective & “imperfect”
    → how do we then deal with the machine learning decision? (systems know all the data etc.)
Big Data Example
  • Does it really matter if Facebook applies fancy machine learning magic that nobody fully understands?! What is behind that? Is there a difference if it is machine learning or handcrafted code?
Q: How do you think that AI can influence our decisions (politics, social media bubble etc)?
  • influencing decisions happens on Amazon, Facebook, Google search, TripAdvisor … (American elections …)
  • is already there! in daily life …
  • lobby exists // out of control …
  • Important challenge! Relevant to society! High impact! (Influence of Facebook, influencing moods vs. nuclear bombs – which actually has the higher impact??)
  • do not do that afterwards, think about the influence of machine learning in advance
    • may have bad consequences for some people (e.g., “gay detector”)
    • How to avoid that (bad consequences)?
    • And even if restrictions may exist: how to avoid people training their own model? ==> out of control! fundamental!
Q: Who tells us whether AI is “the thing”? Why learn machine learning? (necessary: “tabula rasa” learning; how to implement human learning?)
  • compare it to learning programming languages ==> do more interesting stuff with what you (personally) learn
  • machine learning is a “design material“! ==> we can do amazing things with it in HCI