Current as of: June 21, 2019 – 21:00 CET
Machine Learning for Health Informatics
"It is remarkable that a science which began with the consideration of games of chance
should have become the most important object of human knowledge"
Pierre Simon de Laplace, 1812.
2019S, 2.0 h, 3.0 ECTS, Type: VU Lecture with Exercises, Language: English
Venue: Vienna University of Technology > Faculty of Informatics
*COURSE OF 2019 STARTS: Tuesday, 12 March 2019, 17:30 – 20:30*
Seminarraum 127, Gußhausstrasse 27-29, Stiege 1, 3. Stock
>> Link to TISS
Lecturers: Andreas HOLZINGER, Holzinger Group HCI-KDD
Rudi FREUND, Theory & Logic Group
Tutors: Marcus BLOICE, Anna SARANTI, Florian ENDEL
>> Course Syllabus Course of 2019
Questions to: andreas.holzinger AT tuwien.ac.at
Course Venue: The course takes place in Seminarraum 127, Gußhausstrasse 27-29, Stiege 1, 3. Stock
Introduction Paper: Machine Learning for Health Informatics (pdf, 2,000 kB – reference as: Andreas Holzinger 2016. Machine Learning for Health Informatics. In: Lecture Notes in Artificial Intelligence LNAI 9605. Springer, pp. 1-24, doi:10.1007/978-3-319-50478-0_1)
Introduction Video: https://www.youtube.com/watch?v=lc2hvuh0FwQ (Students please watch this video first)
Course Goals:
Health is increasingly turning into a data-driven business. AI/ML provides the necessary methods, algorithms and tools, and the health domain provides the necessary data and domain expertise. To enable successful solutions for the benefit of the patients, the health industry urgently needs a new kind of graduates!
This graduate course follows a research-based teaching (RBT) approach and discusses experimental methods for combining human intelligence with machine learning to solve problems from health informatics. The focus of the course of 2019 is even more on explainable AI, causality and ethical, social and public problems of AI/ML for health informatics. See here a short (6 min) YouTube video on explainable AI
For German readers: Andreas Holzinger 2018. Explainable AI (ex-AI). Informatik-Spektrum, 41, (2), 138-143, doi:10.1007/s00287-018-1102-5
For practical applications we focus on Python – which is to date the most widely used ML language worldwide. Tutorial: Python-Tutorial-for-Students-Machine-Learning-course (pdf, 2,279 kB – reference as: Marcus D. Bloice & Andreas Holzinger 2016. A Tutorial on Machine Learning and Data Science Tools with Python. In: Lecture Notes in Artificial Intelligence LNAI 9605. Springer, pp. 437-483, doi:10.1007/978-3-319-50478-0_22)
Why should I study
AI/Machine Learning for Health Informatics?
1) AI/machine learning (> differences) for health is rapidly growing
AI/Machine learning (ML) is the fastest growing field in computer science (Jordan & Mitchell, 2015. Machine learning: Trends, perspectives, and prospects. Science, 349, (6245), 255-260), and it is well accepted that health informatics is amongst the greatest challenges (LeCun, Bengio, & Hinton, 2015. Deep learning. Nature, 521, (7553), 436-444), with Privacy Aware Machine Learning (PAML) as a must!
The future of medicine is in the data, and privacy-aware machine (un-)learning is no longer a nice-to-have, but a must.
ML is changing the future of health: internationally outstanding universities count on the combination of machine learning and health informatics and expand these fields, for example Carnegie Mellon University, Harvard, Stanford – or in Europe ETH, RWTH, just to name a few!
2) AI/machine learning for health informatics poses enormous business opportunities:
McKinsey: An executive's guide to machine learning
NY Times: The Race Is On to Control Artificial Intelligence, and Tech's Future
Economist: Million-dollar babies
3) AI/machine learning for health informatics provides career opportunities for TU graduates:
"Fei-Fei Li, a Stanford University professor who is an expert in computer vision, said one of her Ph.D. candidates had an offer for a job paying more than $1 million a year, and that was only one of four from big and small companies."
https://www.mckinsey.com/industries/high-tech/our-insights/an-executives-guide-to-machine-learning
4) AI/machine learning for health informatics offers market opportunities for spin-offs:
"By 2020, the market for machine learning applications will reach $40 billion, IDC, a market research firm, estimates.
By 2018, IDC predicts, at least 50 percent of developers will include A.I. features in what they create."
https://www.nytimes.com/2016/03/26/technology/the-race-is-on-to-control-artificial-intelligence-and-techs-future.html?_r=2
Description:
The goal of ML is to develop algorithms which can learn and improve over time and can be used for predictions. In automatic machine learning (aML), great advances have been made, e.g., in speech recognition, recommender systems, or autonomous vehicles. Automatic approaches, e.g. deep learning, greatly benefit from big data with many training sets. In the health domain, however, we are sometimes confronted with a small number of data sets or rare events, where aML approaches suffer from insufficient training samples. Here interactive machine learning (iML) may be of help, having its roots in Reinforcement Learning (RL), Preference Learning (PL) and Active Learning (AL). The term iML can be defined as algorithms that can interact with agents and can optimize their learning behaviour through these interactions, where the agents can also be human. This human-in-the-loop can be beneficial in solving computationally hard problems, e.g., subspace clustering, protein folding, or k-anonymization, where human expertise can help to reduce an exponential search space through heuristic selection of samples. Therefore, what would otherwise be an NP-hard problem reduces greatly in complexity through the input and the assistance of a human agent involved in the learning phase. However, although humans are excellent at pattern recognition in dimensions of ≤3, most biomedical data sets are of much higher dimension than three, making manual data analysis very difficult. Successful application of machine learning in health informatics requires considering the whole pipeline from data preprocessing to data visualization. Consequently, this course fosters the HCI-KDD approach, which encompasses a synergistic combination of methods from two areas to unravel such challenges: Human-Computer Interaction (HCI) and Knowledge Discovery/Data Mining (KDD), with the goal of supporting human intelligence with machine learning.
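As a toy illustration of the human-in-the-loop idea described above (not part of the course materials), the following sketch runs pool-based active learning with uncertainty sampling: a simple 1-D logistic model queries an "oracle" – standing in for the human expert – only for the samples it is least sure about. The decision boundary at x = 0.3 and all other numbers are made up for illustration.

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def oracle(x):
    """Stands in for the human expert in the loop; the true decision
    boundary at x = 0.3 is a made-up example."""
    return 1 if x > 0.3 else 0

pool = [random.uniform(-1.0, 1.0) for _ in range(300)]  # unlabelled pool
labelled = []
w, b = 0.0, 0.0  # 1-D logistic model

for _ in range(25):
    # Uncertainty sampling: query the point the model is least sure about,
    # i.e. the one whose predicted probability is closest to 0.5.
    x = min(pool, key=lambda p: abs(sigmoid(w * p + b) - 0.5))
    pool.remove(x)
    labelled.append((x, oracle(x)))
    # Re-fit on the small labelled set with a few gradient steps.
    for _ in range(50):
        gw = gb = 0.0
        for xi, yi in labelled:
            err = sigmoid(w * xi + b) - yi
            gw += err * xi
            gb += err
        w -= 0.5 * gw / len(labelled)
        b -= 0.5 * gb / len(labelled)

acc = sum((sigmoid(w * x + b) > 0.5) == (oracle(x) == 1) for x in pool) / len(pool)
print(len(labelled), round(acc, 2))  # only 25 of 300 points were ever labelled
```

The point of the sketch is the query strategy, not the classifier: the expensive expert is asked only about ambiguous cases near the current boundary, so a small number of labels suffices.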
Grading:
Machine learning is a highly practical field; consequently this course is a VU: there will be a written exam at the end of the course, and during the course the students will solve related assignments. ECTS breakdown: 75 hours in 15 hours lecture, 15 hours preparation for the lecture and practicals, 30 hours assignments, 15 hours preparation for the 1-hour written exam.
Course Content:
For the successful application of ML in health informatics, a comprehensive understanding of the whole HCI-KDD pipeline, ranging from the physical data ecosystem to the understanding of the end-user in the problem domain, is necessary. In the medical world the inclusion of privacy, data protection, safety and security is mandatory.
Differentiation from and bridging to existing courses:
At TU Vienna there are currently the following courses on "machine learning":
183.605 Machine Learning for Visual Computing, 3 VU, 4.5 ECTS, in winter term, which deals with linear models for regression and classification (Perceptron, Linear Basis Function Models), applications in computer vision, neural nets, error functions and optimization (e.g., pseudo-inverse, gradient descent, Newton method), model complexity, regularization, model selection, Vapnik-Chervonenkis dimension, kernel methods: duality, sparsity, Support Vector Machine, principal component analysis and Hebbian rule, canonical correlation analysis, Bayesian regression, relevance vector machine, clustering and vector quantization (e.g., k-means), overview of deep learning models; the ECTS breakdown is as follows: 112.5 hours in 30 hours lecture time, 70 hours for two assignments, 2.5 hours interviews, 1-hour written exam plus 9 hours preparation time.
183.663 Deep Learning for Visual Computing, 2 VU, 3 ECTS, in winter term, covers deep learning for automatic image analysis, e.g. for classifying images into categories or detecting and distinguishing persons, where deep learning has recently led to breakthroughs; in certain problems, the performance of current methods based on this technology is similar to or even better than that of humans – a novelty in this field. The goal of this lecture is to provide a comprehensive introduction to deep learning and its application to solving practical problems, i.e. computer vision and image processing, parametric models, iterative optimization, feedforward neural networks, backpropagation, convolutional neural networks for classification, detection, and segmentation, software libraries and practical aspects, preprocessing, data augmentation, regularization, visualizations, guest lectures on medical applications and ethical aspects; the ECTS breakdown is as follows: 75 hours in 16 hours lectures, 34 hours programming exercises, 24 hours exam preparation, 1 hour written exam.
184.702 Machine Learning, 3 VU, in winter term, which deals mainly with principles of supervised and unsupervised ML, including pre-processing and data preparation, as well as evaluation of learning systems. ML models discussed may include e.g. Decision Tree Learning, Model Selection, Bayesian Networks, Support Vector Machines, Random Forests, Hidden Markov Models, as well as ensemble methods;
Apart from focusing on health informatics (biological, biomedical, medical, clinical) and health-related problems, we will build on and refer to the courses above, to avoid any duplication, and will particularly focus on solving problems of health with other ML approaches (both aML and iML).
Consequently, this course is an additional benefit for students of computer science to foster machine learning and to show some examples in the important area of health informatics, which is currently a hot topic internationally and opens a lot of future opportunities.
Lecture 01 – Week 11, Tuesday, March 12, 2019
Module 01 – Introduction: Machine learning for health informatics: Introduction, challenges and future directions
Lecture Outline: In this first module we get a rough overview of the differences between automatic machine learning and interactive machine learning and discuss a few future challenges of the HCAI approach to ensure ethically responsible AI/ML. This shall emphasize the integrative ML approach, where at first we learn from prior data, then extract knowledge in order to generalize, discover certain patterns in the data, and use these to make predictions and help to make decisions under uncertainty. The grand future goal is understandability, re-traceability and explainability.
Lecture Keywords: HCI-KDD approach, integrative AI/ML, complexity, automatic ML, interactive ML, explainable AI
Topic 01: The HCI-KDD approach: towards integrative machine learning
Topic 02: Application Area Health: On the complexity of health informatics
Topic 03: Probabilistic learning on the example of Gaussian processes
Topic 04: Automatic Automobile Learning (aML)
Topic 05: Interactive Machine Learning (iML)
Topic 06: Causality vs. Causability
Topic 07: Towards explainable-AI
Conclusion and Future Outlook
- Lecture slides full size (7.403 kB)
- Lecture slides 3 x 3 (4.440 kB)
To get a preview you can have a look at the slides of the last course years: 2018, 2017, 2016
however, please note that for the 2019 exam of course the 2019 slides are relevant
Learning Goals: At the end of the first lecture the students …
+ are aware of some issues of the application domain of medicine and health
+ have an overview of current trends, challenges, hot topics and future aspects of AI/ML for health informatics
+ know the differences, advantages and disadvantages of automatic ML and interactive ML
+ gain an understanding of the importance of re-traceability, transparency, explainability and causality
Reading for Students: (some prereading/postreading and video recommendations):
- Holzinger, A. 2016. Interactive Machine Learning for Health Informatics: When do we need the human-in-the-loop? Springer Brain Informatics, 1-13. doi:10.1007/s40708-016-0042-6
- Dossier: HOLZINGER (2016) Dossier interactive Machine Learning Health Informatics
- Watch the video of Andreas Holzinger: https://youtu.be/lc2hvuh0FwQ
- Watch the video of Google DeepMind Health: https://youtu.be/teZ2m5oTKwM
- "Medicine is so complex, the challenges are so great … we need everything that we can bring to make our diagnostics more precise, more accurate and our therapeutics more focused on that patient." Sir Malcolm GRANT, NHS England, in: Machine learning: ROYAL SOCIETY briefing report, part of the conference series Quantum science and technologies: Transforming our future with machine learning, https://royalsociety.org/topics-policy/projects/machine-learning
Watch the videos: https://www.youtube.com/playlist?list=PLg7f-TkW11iX3JlGjgbM2s8E1jKSXUTsG
Lecture 02 – Week 12, Tuesday, March 19, 2019
Module 02 – From Clinical Decision Making to explainable AI (ex-AI): selected methods of transparent machine learning
Lecture Outline: Medical action is permanent decision making under uncertainty within limited time ("five minutes"). The problem with the most successful AI/ML methods (e.g. deep learning; see the differences between AI-ML-DL here) is that they are often considered to be "black boxes", which is not quite true. Nevertheless, even if we understand the underlying mathematical and theoretical principles, it is difficult to re-enact and to answer the question of why a certain machine decision has been reached. A generally serious drawback is that such models have no explicit declarative knowledge representation, hence have difficulty in generating the required explanatory structures – the context – which considerably limits the achievement of their full potential. Interestingly, the "old symbolic and logic-based AI approaches" did have such explanatory structures, at least for a very narrow domain space. One future goal is implicit knowledge elicitation through efficient human-AI interfaces.
Lecture Keywords: clinical decision making, transparency, re-traceability, re-enaction, reproducibility, explainability
Topic 01 Decision Support Systems (DSS): Can computers help making better decisions? Students read [1]
Topic 02 History of DSS = History of AI – explainable AI is actually the oldest field of Artificial Intelligence
Topic 03 Medical Informatics Example: Towards P4 Medicine
Topic 04 Medical Informatics Example: Case-Based Reasoning (CBR)
Topic 05 Causal Reasoning
Topic 06 Explainability – Causality – Causability Students read [2]
Topic 07 (Some) Methods of Explainable AI
- Lecture slides full size (10.445 kB): 02-185A83-HOLZINGER-EXPLAINABLE-AI-2019ah
- Lecture slides 3 x 3 (8.484 kB): 02-185A83-HOLZINGER-EXPLAINABLE-AI-2019-3x3ah
To get a preview you can have a look at the slides of the last course years: 2018, 2017, 2016
however, please note that for the 2019 exam of course the 2019 slides are relevant
Learning Goals: At the end of this lecture the students …
+ know the roots of decision making and early concepts of medical decision support systems (Advice Taker, MYCIN, GAMUTS)
+ see some examples of the problems of medical decision making
+ can discuss decision support of medical experts by AI systems (also ethical responsibility issues)
+ have a first overview of the principles of explainable AI methods
+ know some of the most relevant methods of explainable AI
for more details please go to the course page (taking place each semester in Graz):
https://human-centered.ai/explainable-ai-causability-2019
Students read:
[1] Michael Duerr-Specht, Randy Goebel & Andreas Holzinger 2015. Medicine and Health Care as a Data Problem: Will Computers become better medical doctors? In: Holzinger, Andreas, Roecker, Carsten & Ziefle, Martina (eds.) Smart Health, State-of-the-Art SOTA Lecture Notes in Computer Science LNCS 8700. Heidelberg, Berlin, New York: Springer, pp. 21-40, doi:10.1007/978-3-319-16226-3_2.
[Can-Computers-assist-doctors-making-improve-decisions]
[2] Andreas Holzinger, Georg Langs, Helmut Denk, Kurt Zatloukal & Heimo Mueller 2019. Causability and Explainability of AI in Medicine. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, doi:10.1002/widm.1312.
Lecture 03 – Week 13, Tuesday, March 26, 2019
Tutorial T1 & Assignment A1 (Tutor: Anna SARANTI): Layer-wise Relevance Propagation (LRP)
Tutorial T2 & Assignment A2 (Tutor: Marcus BLOICE): Data Discovery and Transfer Learning on Skin Lesion Images
All material can be found on our GitHub page:
https://github.com/human-centered-ai-lab/cla-185A83-machine-learning-health-course-2019
Lecture 04 – Week 14, Tuesday, April 2, 2019
Module 03 – Probabilistic Graphical Models: From Knowledge Representation to Graph Model Learning
Lecture Outline: In order to be well prepared for the second tutorial on probabilistic programming, this module provides some basics on graphical models and goes towards methods for Monte Carlo sampling from probability distributions based on Markov Chains (MCMC). This is not only very important, it is awesome, as it is similar to how our brain may work. It allows for computing hierarchical models having a large number of unknown parameters and also works well for rare event sampling, which is often the case in the health informatics domain. So, we start with reasoning under uncertainty, provide some basics on graphical models and go towards graph model learning. One particular MCMC method is the so-called Metropolis-Hastings algorithm, which obtains a sequence of random samples from high-dimensional probability distributions – with which we are often challenged in the health domain. The algorithm is among the top 10 most important algorithms and is named after Nicholas METROPOLIS (1915-1999) and Wilfred K. HASTINGS (1930-2016); the former found it in 1953 and the latter generalized it in 1970 (remember: generalization is a grand goal in science).
Lecture Keywords: reasoning under uncertainty, graph extraction, network medicine, metrics and measures, point-cloud data sets, graphical model learning, MCMC, Metropolis-Hastings algorithm
Topic 01 Decision Making under uncertainty
Topic 02 Some basics of Markov Processes
Topic 03 A few fundamentals of Concept Learning
Topic 04 Essentials of Graphs/Networks and challenges
Topic 05 Bayes' Nets
Topic 06 Graphical Model Learning
Topic 07 Probabilistic Programming
Topic 08 Markov Chain Monte Carlo (MCMC)
Topic 09 Example: Metropolis Hastings Algorithm
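The Metropolis-Hastings step of Topic 09 fits in a few lines of plain Python. In this sketch the target density (a standard normal), the step size and the sample count are arbitrary choices for illustration:

```python
import math
import random

random.seed(42)

def log_target(x):
    """Unnormalised log-density of the target, here a standard normal N(0, 1)."""
    return -0.5 * x * x

def metropolis_hastings(n_samples, step=1.0):
    """Random-walk Metropolis-Hastings: propose x' ~ N(x, step^2) and accept
    with probability min(1, p(x') / p(x)); the proposal is symmetric, so the
    Hastings correction term cancels."""
    x = 0.0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        if math.log(random.random()) < log_target(proposal) - log_target(x):
            x = proposal  # accept; otherwise keep the current state
        samples.append(x)
    return samples

samples = metropolis_hastings(50_000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # should be close to 0 and 1
```

Note that only density *ratios* are needed, which is exactly why the method works for unnormalised, high-dimensional posteriors.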
- Lecture slides full size (8.858 kB): 04-185A83-GRAPHICAL-MODELS-I-HOLZINGER-2019
- Lecture slides 3 x 3 (8.689 kB): 04-185A83-GRAPHICAL-MODELS-I-HOLZINGER-2019-3x3
To get a preview you can have a look at the slides of the last course years: 2018, 2017, 2016
however, please note that for the 2019 exam of course the 2019 slides are relevant
Learning Goals: At the end of this lecture the students …
+ are aware of reasoning and decision making
+ have an idea of graphical models
+ understand the advantages of probabilistic programming
Reading for Students:
- Bishop, Christopher M. (2007) Pattern Recognition and Machine Learning. Heidelberg: Springer [Chapter 8: Graphical Models]
- Chenney, S. & Forsyth, D. A. 2000. Sampling plausible solutions to multi-body constraint problems. Proceedings of the 27th annual conference on Computer graphics and interactive techniques. ACM. 219-228, doi:10.1145/344779.344882.
- Ghahramani, Z. 2015. Probabilistic machine learning and artificial intelligence. Nature, 521, (7553), 452-459, doi:10.1038/nature14541
- Gordon, A. D., Henzinger, T. A., Nori, A. V. & Rajamani, S. K. Probabilistic programming. Proceedings of the on Future of Software Engineering, 2014. ACM, 167-181, doi:10.1145/2593882.2593900
- KOLLER, Daphne & FRIEDMAN, Nir (2009) Probabilistic graphical models: principles and techniques. Cambridge (MA): MIT Press.
- Metropolis, N., Rosenbluth, A. W., Rosenbluth, M. N., Teller, A. H. & Teller, E. 1953. Equation of State Calculations by Fast Computing Machines. The Journal of Chemical Physics, 21, (6), 1087-1092, doi:10.1063/1.1699114. (34,123 citations as of 21.03.2017)
- Wainwright, Martin J. & Jordan, Michael I. (2008) Graphical Models, Exponential Families, and Variational Inference. Foundations and Trends in Machine Learning, Vol. 1, 1-2, 1-305, doi:10.1561/2200000001 [Link to pdf]
- Wood, F., Van De Meent, J.-W. & Mansinghka, V. A New Approach to Probabilistic Programming Inference. AISTATS, 2014. 1024-1032.
A hot topic in ML is graph bandits:
- Villar, S. S., Bowden, J. & Wason, J. 2015. Multi-armed Bandit Models for the Optimal Design of Clinical Trials: Benefits and Challenges. Statistical Science, 199-215, doi:10.1214/14-STS504, accessible via: https://arxiv.org/abs/1507.08025
Lecture 05 – Week 15 – April 9, 2019
Tutorial T2 – Probabilistic Programming with Python (Tutor: Florian ENDEL) and second assignment
In this tutorial, we will explore probabilistic programming with the Python framework PyMC3. "Probabilistic programming allows for automatic Bayesian inference on user-defined probabilistic models." [1]
We will start with a brief repetition of the previous lecture by discussing Bayes' theorem, Bayesian models and Bayesian parameter estimation using Markov Chain Monte Carlo (MCMC) sampling. Next, we will dive deeper into the capabilities, workflow and specific usage of PyMC3. Language primitives, stochastic variables and the intuitive syntax to define complex models and networks will be explored. Increasingly complex examples including, e.g., a simple statistical test, linear (LM) and generalized linear (GLM) models as well as multilevel modelling will highlight the applicability of Bayes' methodology as well as the potential and simplicity of probabilistic programming with PyMC3. An exercise based on real-world research [2] will demonstrate the advantage of multilevel modelling and probabilistic programming.
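As a warm-up for what PyMC3 automates, here is a minimal sketch of Bayesian parameter estimation without any framework: for the conjugate Beta-Binomial case the posterior is available in closed form, and a brute-force grid application of Bayes' theorem gives the same answer. The observed counts are hypothetical:

```python
import math

# Hypothetical data: 7 successes in 10 trials.
successes, trials = 7, 10

# With a uniform Beta(1, 1) prior, the Binomial likelihood is conjugate,
# so the posterior is available in closed form: Beta(1 + s, 1 + f).
a = 1 + successes
b = 1 + (trials - successes)
posterior_mean = a / (a + b)
posterior_sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))

# The same posterior via a brute-force grid approximation of Bayes' theorem
# (prior x likelihood, normalised) - the computation that PyMC3 generalises
# with MCMC sampling for models with no closed form.
grid = [i / 1000 for i in range(1, 1000)]
unnorm = [t ** successes * (1 - t) ** (trials - successes) for t in grid]
norm = sum(unnorm)
grid_mean = sum(t * u / norm for t, u in zip(grid, unnorm))

print(round(posterior_mean, 3), round(grid_mean, 3))  # both 0.667
```

Conjugate models like this are the rare special case; for the LM, GLM and multilevel models of the tutorial, no closed form exists, which is exactly where PyMC3's MCMC machinery comes in.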
Introduction to PyMC3: https://florian.endel.at/Presentation/PyMC3Intro/
Assignment Instruction: Exercise-Therapeutic-Touch-LV185A83-2018
The 2019 course will again cover Multilevel Modelling (adapted from Chris Fonnesbeck):
https://florian.endel.at/Presentation/PyMC3Intro/multilevel_modeling#/
Please refer to our GitHub pages: https://github.com/human-centered-ai-lab/cla-185A83-machine-learning-health-course-2019
[1] John Salvatier, Thomas V. Wiecki & Christopher Fonnesbeck 2016. Probabilistic programming in Python using PyMC3. PeerJ Computer Science, 2, e55, doi:10.7717/peerj-cs.55
[2] Linda Rosa, Emily Rosa, Larry Sarner & Stephen Barrett 1998. A Close Look at Therapeutic Touch. JAMA, 279, (13), 1005-1010, doi:10.1001/jama.279.13.1005
Additional resources:
Lecture slides 2017: full size (815 kB) 2017-04-04 Probabilistic Programming – Endel
Examples 2017: https://github.com/FlorianEndel/Probabilistic-Programming-Tutorial
MCMC: https://chi-feng.github.io/mcmc-demo/app.html
[3] A. Pfeffer, Practical probabilistic programming. Shelter Island, NY: Manning Publications, Co, 2016.
[4] C. Davidson-Pilon, Bayesian methods for hackers: probabilistic programming and Bayesian inference. New York: Addison-Wesley, 2016.
[5] J. K. Kruschke, Doing Bayesian data analysis: a tutorial with R, JAGS, and Stan, 2nd Edition. Boston: Academic Press, 2015.
Easter break (collect Easter eggs and do your programming assignments)
Lecture 06 – Week 18, Tuesday, April 30, 2019
Data for Machine Learning: Quality, fusion, integration, probabilistic information and entropy
Lecture Outline: In this lecture we will review some fundamentals on data and information. In order to carry out successful machine learning, we need not only appropriate algorithms, but above all data. However, it is not only important to have sufficiently large amounts of data, but also to have relevant data and the corresponding domain knowledge. You will always get a result; the crucial question is whether and to what extent the results are relevant to support medical decision making under uncertainty; and here we need the concepts of Bayes and Laplace, entropy as a measure of uncertainty in distributions, and KL divergence as a way of measuring the match between two distributions.
Lecture Keywords: data, information, probability, entropy, cross-entropy, Kullback-Leibler divergence
Topic 01 Data – The underlying physics of data
Topic 02 Data – Biomedical data sources – taxonomy of data
Topic 03 Data – Integration, Mapping and Fusion of data
Topic 04 Data – Bayes and Laplace probabilistic information p(x)
Topic 05 Information Theory – Information Entropy
Topic 06 Information Cross-Entropy and Kullback-Leibler Divergence
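The three quantities of Topics 05-06 can be computed directly from their definitions; the two example distributions below are made up for illustration:

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum_i p_i log2 p_i, in bits."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def cross_entropy(p, q):
    """H(p, q) = -sum_i p_i log2 q_i: expected code length when data from p
    is encoded with a code optimised for q."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q) if pi > 0)

def kl_divergence(p, q):
    """D_KL(p || q) = H(p, q) - H(p); non-negative, and zero iff p == q."""
    return cross_entropy(p, q) - entropy(p)

p = [0.5, 0.25, 0.25]  # "true" distribution (made-up example)
q = [1/3, 1/3, 1/3]    # model distribution
print(round(entropy(p), 3))           # 1.5 bits
print(round(kl_divergence(p, q), 3))  # > 0: q does not match p
```

The decomposition D_KL(p || q) = H(p, q) - H(p) is why minimising cross-entropy loss against fixed data is equivalent to minimising the KL divergence to the data distribution.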
- Lecture slides full size (pdf, 6.495 kB)
- Lecture slides 3 x 3 (pdf, 9.632 kB)
To get a preview you can have a look at the slides of the last course years: 2018, 2017, 2016
however, please note that for the 2019 exam of course the 2019 slides are relevant
Learning Goals: At the end of this lecture the students
+ are aware of the problems of health data and understand the importance of data integration in the life sciences.
+ understand the concept of probabilistic information with a focus on the problem of estimating the parameters of a Gaussian distribution (maximum likelihood problem).
+ recognize the usefulness of the relative entropy, called Kullback–Leibler divergence, which is very important, particularly for sparse variational methods between stochastic processes.
Reading for Students: (some prereading/postreading recommendations):
- Banerjee, O., El Ghaoui, L. & D'Aspremont, A. 2008. Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. The Journal of Machine Learning Research, 9, 485-516, https://www.jmlr.org/papers/v9/banerjee08a.html
- De Boer, P.-T., Kroese, D. P., Mannor, S. & Rubinstein, R. Y. 2005. A tutorial on the cross-entropy method. Annals of Operations Research, 134, (1), 19-67. doi:10.1007/s10479-005-5724-z
- Galas, D. J., Dewey, T. G., Kunert-Graf, J. & Sakhanenko, N. A. 2017. Expansion of the Kullback-Leibler Divergence, and a new class of information metrics. arXiv:1702.00033.
- Holzinger, A., Dehmer, M. & Jurisica, I. (2014). Knowledge Discovery and interactive Data Mining in Bioinformatics – State-of-the-Art, future challenges and research directions. BMC Bioinformatics, 15, (S6), I1. doi:10.1186/1471-2105-15-S6-I1
- Holzinger, A., Hörtenhuber, M., Mayer, C., Bachler, M., Wassertheurer, S., Pinho, A. & Koslicki, D. (2014). On Entropy-Based Data Mining. In: Holzinger, A. & Jurisica, I. (eds.) Interactive Knowledge Discovery and Data Mining in Biomedical Informatics, Lecture Notes in Computer Science, LNCS 8401. Berlin Heidelberg: Springer, pp. 209-226. doi:10.1007/978-3-662-43968-5_12
Online available: https://pure.tugraz.at/portal/files/3108669/HOLZINGER_Entropy_based_data_mining.pdf
- Loshchilov, Ilya, Schoenauer, Marc & Sebag, Michele (2013). KL-based Control of the Learning Schedule for Surrogate Black-Box Optimization. arXiv:1308.2655.
- Matthews, A., Hensman, J., Turner, R. E. & Ghahramani, Z. On sparse variational methods and the Kullback-Leibler divergence between stochastic processes. Proceedings of the Nineteenth International Conference on Artificial Intelligence and Statistics (AISTATS), 2016. JMLR, 231-239, https://www.jmlr.org/proceedings/papers/v51/matthews16.html
Additional Reading: (to foster a deeper understanding of information theory related to the life sciences):
- Manca, Vincenzo (2013). Infobiotics: Information in Biotic Systems. Heidelberg: Springer. (This volume is a fascinating journey through the world of discrete biomathematics and a continuation of the 1944 paper by Erwin Schrödinger: What Is Life? The Physical Aspect of the Living Cell, Dublin, Dublin Institute for Advanced Studies at Trinity College)
Lecture 07, Tuesday, May 7, 2019
Module 05 – Causality, Explainability, Ethical, Legal and Social Issues of AI/ML in Health Informatics
Keywords: Causality, Graphical Causal Models, AI Ethics
Topic 01: Causality
Topic 02: Explainability and Causability
Topic 03: AI Ethics
Topic 04: Social Issues of AI
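A toy simulation (with assumed numbers, not from the lecture materials) of why conditioning, P(Y | X = 1), and intervening, P(Y | do(X = 1)), can differ when a confounder Z drives both X and Y:

```python
import random

random.seed(0)

def draw(intervene_x=None):
    """One sample from a toy structural causal model (made-up probabilities):
    a confounder Z influences both X and Y, but X has NO causal effect on Y."""
    z = random.random() < 0.5
    if intervene_x is None:
        x = random.random() < (0.8 if z else 0.2)  # observational regime
    else:
        x = intervene_x                            # do(X = x): cut the Z -> X edge
    y = random.random() < (0.9 if z else 0.1)
    return x, y

N = 100_000

# Conditioning: P(Y = 1 | X = 1) - selecting X = 1 also selects mostly Z = 1.
obs = [y for x, y in (draw() for _ in range(N)) if x]
p_y_given_x1 = sum(obs) / len(obs)

# Intervening: P(Y = 1 | do(X = 1)) - forcing X leaves Z, and hence Y, untouched.
do_ = [y for _, y in (draw(intervene_x=True) for _ in range(N))]
p_y_do_x1 = sum(do_) / len(do_)

print(round(p_y_given_x1, 2), round(p_y_do_x1, 2))  # ~0.74 vs ~0.50 in theory
```

The observed association (about 0.74 vs a baseline of 0.50) is entirely due to the confounder; an intervention reveals that X does nothing to Y. This is the core distinction between seeing and doing in Pearl's framework.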
- Lecture slides full size (4.746 kB): 07-185A83-HOLZINGER-CAUSALITY-AI-ETHICS-2019
- Lecture slides 3 x 3 (1.924 kB): 07-185A83-HOLZINGER-CAUSALITY-AI-ETHICS-2019-3x3
To get a preview you can have a look at the slides of the last course years: 2018, 2017, 2016
however, please note that for the 2019 exam of course the 2019 slides are relevant
Learning Goals: At the end of this lecture the students …
+ have a basic overview on the problem of causality, causal inference and functional causal models
+ can compare the problems of causality with problems of usability
+ have acquired some understanding of reasoning (deductive, inductive, abductive)
+ are aware of the difficulty of hard inference problems
+ have a feeling for the problems of AI ethics, laws and social issues of AI
Reading for students:
[0] Jonas Peters, Dominik Janzing & Bernhard Schölkopf 2017. Elements of causal inference: foundations and learning algorithms, Cambridge (MA), online available at:
https://web.math.ku.dk/~peters/elements.html
[1] Judea Pearl 1988. Evidential reasoning under uncertainty. In: Shrobe, Howard E. (ed.) Exploring artificial intelligence. San Mateo (CA): Morgan Kaufmann, pp. 381-418.
[2] Matt J. Kusner, Joshua Loftus, Chris Russell & Ricardo Silva. Counterfactual fairness. In: Guyon, Isabelle, Luxburg, Ulrike Von, Bengio, Samy, Wallach, Hanna, Fergus, Rob & Vishwanathan, S.V.N., eds. Advances in Neural Information Processing Systems 30 (NIPS 2017), 2017. 4066-4076.
[3] Judea Pearl 2009. Causality: Models, Reasoning, and Inference (2nd Edition), Cambridge, Cambridge Academy Press.
[4] Judea Pearl 2018. Theoretical Impediments to Auto Learning With Seven Sparks from the Causal Revolution. arXiv:1801.04016.
Final Lecture 08, Tuesday, May 28, 2019
The grading consists of 3 independent parts:
I) Final Exam (written test quiz, 30%) – see sample exam here
STUDENT-LV-185A83-Machine-Learning-for-Health-Informatics-Examination-course-of-2019
II) Presentations of the assignments (orally, 10 %)
III) Grading of the assignments (coding, 20 % each, 60 % total)
Note: The course will be adapted to the students appropriately as the course progresses. Each lecture is preceded by a quiz on the last lecture. The slides will be put online AFTER each lecture – and only those are binding for the final exam. Note that the slides presented and the slides shown on the Web can differ for didactical purposes.
Short Bio of Lecturer:
Andreas HOLZINGER promotes a synergistic approach to Human-Centred AI (HCAI) and has pioneered interactive machine learning (iML) with the human-in-the-loop. He promotes an integrated machine learning approach with the goal to augment human intelligence with artificial intelligence to help solve problems in health informatics.
Due to rising ethical, social and legal issues governed by the EU, future AI-supported systems must be made transparent and re-traceable, thus human interpretable. Andreas' aim is to explain why a machine decision has been reached, paving the way towards explainable AI and Causability, ultimately fostering ethically responsible machine learning, trust and acceptance for AI.
Andreas obtained a Ph.D. in Cognitive Science from Graz University in 1998 and his Habilitation (second Ph.D.) in Computer Science from Graz University of Technology in 2003. Andreas was Visiting Professor for Machine Learning & Knowledge Extraction in Verona, at RWTH Aachen, University College London and Middlesex University London. Since 2016 Andreas is Visiting Professor for Machine Learning in Health Informatics at the Faculty of Informatics at Vienna University of Technology. Currently, Andreas is Visiting Professor for explainable AI at the Alberta Machine Intelligence Institute, University of Alberta, Canada.
Group Homepage: https://explainable-ai.org
Personal Homepage: https://www.aholzinger.at
Youtube Introduction Video: https://youtu.be/lc2hvuh0FwQ
Conference Homepage: https://cd-make.net
Short Bio of Tutors:
Marcus BLOICE is finishing his PhD this year on the application of deep learning to medical images. Currently, he is working on the Augmentor project and the Digital Pathology project, and is involved in the featureCloud project. He has a background in computer science from the University of Sunderland (UK). He is a Python developer and has experience with the popular machine learning pipelines. Marcus also has experience in machine learning on large medical images.
Florian ENDEL started working as a database programmer in the general field of healthcare research in 2007 – after gathering first experience as a high school teacher for two years and working as a freelance Web designer. A specific highlight is the development and supervision, since 2008, of "GAP-DRG", a database holding massive amounts of reimbursement data from the Austrian social insurance system. Since then, he has been part of several national and international research projects handling, among others, data management, data governance, statistical analytics and secure computing infrastructure. He is currently participating in the EU FP7 project CEPHOS-LINK and the FFG K-Projekt DEXHELPP, and is still finishing his master's thesis.
Anna SARANTI is just finalizing her Master's studies with a thesis on Applying Probabilistic Graphical Models and Deep Reinforcement Learning in a Learning-Aware Application, supervised by Andreas Holzinger and Martin Ebner at Graz University of Technology. Anna is currently working as a machine learning engineer in Vienna.
Additional pointers and reading suggestions can be found at the
Learning Machine Learning page
Excellent Resources for Exercises
Github repository by Alberto Blanco Garcés https://github.com/alberduris
Related Books in Machine Learning:
- MITCHELL, Tom M., 1997. Machine learning, New York: McGraw Hill. (Book Webpages)
Undoubtedly, this is the classic source from the pioneer of ML for getting a perfect first contact with the fascinating field of ML, for undergraduate and graduate students, and for developers and researchers. No previous background in artificial intelligence or statistics is required. - FLACH, Peter, 2012. Machine Learning: The Art and Science of Algorithms that Make Sense of Data. Cambridge: Cambridge University Press. (Book Webpages)
Introductory for advanced undergraduate or graduate students, at the same time aimed at interested academics and professionals with a background in neighbouring disciplines. It includes the necessary mathematical details, but emphasizes how-to. - MURPHY, Kevin, 2012. Machine learning: a probabilistic perspective. Cambridge (MA): MIT Press. (Book Webpages)
This book focuses on probability, which can be applied to any problem involving uncertainty – which is very much the case in medical informatics! This book is suitable for advanced undergraduate or graduate students and needs some mathematical groundwork. - BISHOP, Christopher M., 2006. Pattern Recognition and Machine Learning. New York: Springer-Verlag. (Book Webpages)
This is a classic work aimed at advanced students and PhD students, researchers and practitioners, not assuming much mathematical knowledge. - HASTIE, Trevor, TIBSHIRANI, Robert, FRIEDMAN, Jerome, 2009. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer-Verlag (Book Webpages)
This is the classic background from supervised to unsupervised learning, with many applications in medicine, biology, finance, and marketing. For advanced undergraduates and graduates with some mathematical interest.
To get an understanding of the complexity of the health informatics domain:
- Andreas HOLZINGER, 2014. Biomedical Informatics: Discovering Knowledge in Big Data.
New York: Springer. (Book Webpage)
This is a student textbook for undergraduate and graduate students in health informatics, biomedical engineering, telematics or software engineering with an interest in knowledge discovery. The book fosters an integrated approach, i.e. in the health sciences a comprehensive and overarching overview of the data science ecosystem and knowledge discovery pipeline is essential. - Gregory A PETSKO & Dagmar RINGE, 2009. Protein Structure and Function (Primers in Biology). Oxford: Oxford University Press (Book Webpage)
This is a comprehensive introduction to the building blocks of life, a beautiful book without ballast. It starts with the consideration of the link between protein sequence and structure, and continues to explore the structural basis of protein functions and how these functions are controlled. - Ingvar EIDHAMMER, Inge JONASSEN, William R TAYLOR, 2004. Protein Bioinformatics: An Algorithmic Approach to Sequence and Structure Analysis. Chichester: Wiley.
Bioinformatics is the study of biological information and biological systems – such as the relationships between the sequence, structure and function of genes and proteins. The subject has seen tremendous development in recent years, and there are ever-increasing needs for a good understanding of quantitative methods in the study of proteins. This book takes the novel approach of covering both the sequence and structure analysis of proteins from an algorithmic perspective.
Among the many tools (we will concentrate on Python), some useful and popular ones include:
- WEKA. Since 1993, the Waikato Environment for Knowledge Analysis has been a very popular open source tool. In 2005 Weka received the SIGKDD Data Mining and Knowledge Discovery Service Award: it is easy to learn and easy to use [WEKA]
- Mathematica. Since 1988 a commercial symbolic mathematical computation system, easy to use [Mathematica]
- MATLAB. Short for MATrix LABoratory, it is a commercial numerical computing environment dating from 1984, coming with a proprietary programming language by MathWorks, very popular at universities where it is licensed, awkward for daily practice [Matlab]
- R. Coming from the statistics community, it is a very powerful tool implementing the S programming language, used by data scientists and analysts. [The R-Project]
- Python. Currently perhaps the most popular scientific language for ML [Python Software Foundation]
An excellent source for learning numerics and science with Python is: https://www.scipy-lectures.org/ - Julia. Since 2012, a rising scientific language for technical computing with better performance than Python. IJulia, a collaboration between the Jupyter and Julia communities, provides a powerful browser-based graphical notebook interface to Julia. [julialang.org]
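To give a flavour of the scientific Python stack the course concentrates on, here is a minimal sketch (the synthetic data and the model are made up for illustration, using only NumPy): a least-squares fit of a line to noisy data.

```python
import numpy as np

# Synthetic data: a line with slope 2 and intercept 1, plus Gaussian noise
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y = 2.0 * x + 1.0 + rng.normal(0.0, 0.5, size=x.shape)

# Design matrix [x, 1] for the linear model y = a*x + b
A = np.column_stack([x, np.ones_like(x)])

# Solve the least-squares problem min ||A @ [a, b] - y||
(a, b), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"estimated slope = {a:.2f}, intercept = {b:.2f}")
```

With 50 data points and modest noise, the estimates land close to the true slope 2 and intercept 1; the same few lines carry over directly to the larger pipelines (pandas, scikit-learn) discussed below.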
Please have a look at: What tools do people mostly use to solve problems?
Recommended reading on tools includes:
- Wes McKINNEY (2012) Python for Data Analysis: Data Wrangling with Pandas, NumPy, and IPython. Beijing et al.: O'Reilly.
This is a practical introduction by the author of the Pandas library. [Google-Books] - Ivo BALBAERT (2015) Getting Started with Julia Programming. Birmingham: Packt Publishing.
A good start for the Julia language, more focused on scientific computing projects; it is assumed that you already know a high-level dynamic language such as Python. [Google-Books]
International Courses on Machine Learning:
- Carnegie Mellon University > Machine Learning Course 10-701 2015
by Eric XING (expertise) and Ziv BAR-JOSEPH (expertise)
https://www.cs.cmu.edu/~epxing/Class/10701/lecture.html - Carnegie Mellon University > Machine Learning Course 10-701/15-781 2011
by Tom MITCHELL (expertise)
https://www.cs.cmu.edu/~tom/10701_sp11/ - Carnegie Mellon University > Machine Learning Course 10-601 2015
by Maria-Florina BALCAN (expertise) and Tom MITCHELL (expertise)
https://www.cs.cmu.edu/~ninamf/courses/601sp15/ - Carnegie Mellon University > Machine Learning Course 10-701 2013
by Alex SMOLA (expertise)
https://alex.smola.org/teaching/cmu2013-10-701/index.html - Carnegie Mellon University > Machine Learning Course 10601b 2015
by Seyoung KIM https://www.cs.cmu.edu/~10601b/ - Cornell University > Machine Learning CS 4780/5780 2014
by Thorsten JOACHIMS (expertise)
https://www.cs.cornell.edu/courses/cs4780/2014fa/ - Cornell University > General Machine Learning, Knowledge Extraction
and Data Science courses
https://machinelearning.cis.cornell.edu/pages/courses.php - Oxford > Department of Computer Science > Machine Learning: 2014-2015
by Nando de FREITAS (expertise)
https://www.cs.ox.ac.uk/teaching/courses/2014-2015/ml/index.html
Conferences on Machine Learning with a special focus on health applications
- CD-MAKE – Cross Domain Conference for MAchine Learning and Knowledge Extraction
https://cd-make.net
- NIPS (now called NeurIPS) – has always had workshops on machine learning for health
https://neurips.cc - ICML – also regularly has workshops/sessions on ML for health
https://icml.cc/
Pointers:
A) Students with a GENERAL interest in machine learning should definitely browse these sources:
- TALKING MACHINES – Human conversation about machine learning by Katherine GORMAN and Ryan P. ADAMS <expertise>
excellent audio material – 24 episodes in 2015 and three new episodes in season two 2016 (as of 14.02.2016) - This Week in Machine Learning and Artificial Intelligence Podcast
https://twimlai.com - Data Skeptic – Data science, statistics, machine learning, artificial intelligence, and scientific skepticism
https://dataskeptic.com - VIDEOLECTURES.NET Machine learning talks (3,580 items up to 31.01.2017) ML is grouped into subtopics
and displayed as a map – highly recommended - TUTORIALS ON TOPICS IN MACHINE LEARNING by Bob Fisher from the University of Edinburgh, UK
B) Students with a SPECIFIC interest in interactive machine learning should have a look at:
https://human-centered.ai/lv-706-315-interactive-machine-learning/
This page is officially approved by HCI-KDD
Source: https://human-centered.ai/machine-learning-for-health-informatics-class-2019/