2022 SACAIR Tutorials
On Tuesday, 6 December (Tutorial Day), SACAIR2022 is presenting five tutorials.
The NITheCS Workshop is taking place in parallel with the Tutorials – please click here for more information on the NITheCS Workshop.
Please scroll down for more information on the 2022 SACAIR Tutorials:
What is AI Ethics and why should you care? | Emma Ruttkamp-Bloem | Professor | Head of the Department of Philosophy | University of Pretoria
Bio: Emma Ruttkamp-Bloem
Professor Emma Ruttkamp-Bloem is a philosopher of science and technology, a logician, an AI ethics policy adviser and machine ethics researcher. She is the Head of the Department of Philosophy at the University of Pretoria. Emma is the AI ethics lead at the Centre for AI Research (CAIR) and the chair of the Southern African Conference on AI Research (SACAIR). She is a full member of the International Academy for the Philosophy of Science. She has been the elected South African representative at the International Union of the History and Philosophy of Science and Technology (IUHPST) since 2014.
She was the Chairperson of the UNESCO Ad Hoc Expert Group that prepared the draft of the 2021 UNESCO Global Recommendation on the Ethics of AI. She is a current member of the UNESCO Ad Hoc Expert Group working on implementing the Recommendation. She is the rapporteur for the UNESCO Commission on the Ethics of Scientific Knowledge and Technology (COMEST). Emma is a member of the African Union Development Agency (AUDA)-NEPAD Consultative Roundtable on Ethics in Africa and of the African Commission on Human and Peoples’ Rights (ACHPR) task team working on the Resolution 473 study on Human and Peoples’ Rights and AI, Robotics and other New and Emerging Technologies in Africa. She is the South African representative at the Responsible AI Network Africa (RAIN), which is a joint venture of the Technical University of Munich and the Kwame Nkrumah University of Science and Technology in Ghana. She was the co-convener of the Ethics Working Group at the AI for Atoms Technical Meeting in 2021 at the International Atomic Energy Agency (IAEA). She is also a founding member of the Global South AI 4BetterFutures Consortium, and a member of the Global Academic Network at the Center for AI and Digital Policy, Washington DC.
Emma serves on various international advisory boards: the Wallenberg AI, Autonomous Systems and Software Programme (Human Sciences) hosted by Umeå University in Sweden; the Global AI Ethics Institute; the International Group of Artificial Intelligence (IGOAI); the Innovation Hub on Artificial Intelligence for Sexual, Reproductive and Maternal Health in Africa (HASH), distributed between the Academy for Health Innovation in Uganda, the Makerere University AI Lab in Uganda, and Sunbird AI; SAP SE, Germany; and the Interdisciplinary Centre of Digital Futures (ICDF) at the University of the Free State, South Africa.
AI technology has the power to change the lives of every person on earth for the better. It is, however, doubtful whether this potential will be realized, given the business model on which AI technology innovation is built and the economic and political inequality in the world. This situation, as well as the very nature of data-driven AI, raises a host of concerns about the impact of this technology on humankind as a whole, the environment, and individual human beings.
Emma will give a brief overview of the main concerns, and introduce AI ethics as a multi-faceted discipline. She will also explain that the notion of ethics at issue when engaging in AI ethics is dynamic, bottom-up, and reasoned.
Attendees will be invited to take part in debating the notion of responsible AI and, if time allows, analyse case studies to demonstrate the complexity of the issues facing supporters of responsible and ethical AI.
NVIDIA Fundamentals of Deep Learning | Dustin van der Haar | Associate Professor | University of Johannesburg
Bio: Dustin van der Haar
Dustin van der Haar is an Associate Professor in the Academy of Computer Science and Software Engineering at the University of Johannesburg. His primary field of interest is human-centred artificial intelligence, where he uses pattern recognition for good. In other words, Dustin focuses on developing algorithms that solve text-, signal- and vision-based human problems that matter. Dustin has received multiple national and international awards and published widely in pattern recognition, encompassing the fields of biometrics, computer vision, and medical image analysis. His work ranges from keeping the wrong people out and perfecting your Gwara Gwara to creating software that makes sure your koeksusters are not too brown and your recycling ends up in the right place.
Discover how deep learning works through hands-on exercises in computer vision and natural language processing. You’ll train deep learning models from scratch, learning tools and tricks to achieve highly accurate results. You’ll also learn to leverage freely available, state-of-the-art pre-trained models to save time and get your deep learning application up and running quickly.
By participating in this workshop, you’ll:
- Learn the fundamental techniques and tools required to train a deep learning model
- Gain experience with common deep learning data types and model architectures
- Enhance datasets through data augmentation to improve model accuracy
- Leverage transfer learning between models to achieve efficient results with less data and computation
- Build confidence to take on your project with a modern deep learning framework
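The workshop uses NVIDIA's own hands-on environment, but the data augmentation idea in the list above can be sketched in a few lines of framework-free NumPy (a toy illustration with invented 4x4 "images", not the workshop's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy batch of 8 "images" (4x4, single channel), values in [0, 1].
batch = rng.random((8, 4, 4))

def augment(images, rng):
    """Triple the effective dataset: originals, horizontal flips, noisy copies."""
    flipped = images[:, :, ::-1]  # mirror each image left-to-right
    noisy = np.clip(images + rng.normal(0.0, 0.05, images.shape), 0.0, 1.0)
    return np.concatenate([images, flipped, noisy], axis=0)

augmented = augment(batch, rng)
print(augmented.shape)  # (24, 4, 4): original + flipped + noisy copies
```

Real pipelines apply such transforms on the fly during training, so the model sees a slightly different variant of each image every epoch, which reduces overfitting and improves accuracy on small datasets.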
Prerequisites: an understanding of fundamental programming concepts in Python, such as functions, loops, dictionaries, and arrays.
Generative adversarial networks (GANs): design, use-cases, and threats | Emile Engelbrecht | Ph.D. candidate | University of Stellenbosch
Bio: Emile Engelbrecht
Emile is a Ph.D. candidate in Electronic Engineering and Data Science at the University of Stellenbosch. His research involves applying GAN theory to improve the cost-efficiency and practicality of classification modeling.
Machine learning was revolutionized by the advent of generative adversarial networks (GANs). However, although GANs offer substantial benefits to civil society, they also pose growing threats. In this tutorial, Emile will describe GANs both at a high, layman's level, to give a general understanding, and at the low coding level for those interested in GAN theory. He will then describe and show examples of GAN applications (e.g. medical diagnosis and deepfakes), which will form the foundation for a conversation on the link between generative models and the simulation hypothesis. Finally, Emile will conclude with an open-floor discussion on how we might regulate these and future machine learning models.
The coding aspect of this tutorial will take approximately 30 minutes, but participating in it is by no means mandatory. Only those who wish to code their first baby GAN need bring a laptop with an internet connection (WiFi will be available).
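To give a flavour of the adversarial idea before the tutorial, here is a minimal NumPy sketch (a toy illustration, not the tutorial's material): a one-parameter generator G(z) = theta + z tries to fool a logistic discriminator into accepting its samples as draws from the real distribution N(3, 0.5), using manually derived gradients.

```python
import numpy as np

rng = np.random.default_rng(42)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

# Real data: draws from N(3, 0.5). The generator G(z) = theta + z (z ~ N(0, 0.5))
# matches it exactly once theta reaches 3. Discriminator: D(x) = sigmoid(w*x + b).
theta, w, b = 0.0, 0.1, 0.0
lr, batch = 0.05, 64

for _ in range(5000):
    x_real = rng.normal(3.0, 0.5, batch)
    x_fake = theta + rng.normal(0.0, 0.5, batch)

    # Discriminator step: minimise -log D(real) - log(1 - D(fake)).
    p_real = sigmoid(w * x_real + b)
    p_fake = sigmoid(w * x_fake + b)
    w -= lr * (np.mean((p_real - 1.0) * x_real) + np.mean(p_fake * x_fake))
    b -= lr * (np.mean(p_real - 1.0) + np.mean(p_fake))

    # Generator step (non-saturating loss): minimise -log D(fake).
    p_fake = sigmoid(w * (theta + rng.normal(0.0, 0.5, batch)) + b)
    theta -= lr * np.mean((p_fake - 1.0) * w)

print(f"theta after training: {theta:.2f}")  # drifts toward 3
```

Real GANs replace the single parameters with deep networks and backpropagation, but the alternating two-player training loop is exactly this.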
Amazon SageMaker Clarify | Kgomotso Welcome | Solutions Architect | Amazon Web Services
Bio: Kgomotso Welcome
Kgomotso Welcome (Pronouns – she/her/hers) is a Solutions Architect from Amazon Web Services based in Cape Town, South Africa. She started her career in 2020 as a Graduate Cloud Support Associate at AWS and later moved to the AWS Solutions Architect team. Throughout her time at AWS, she has specialized in AI and machine learning.
Presentation Content – Amazon SageMaker Clarify
With the increasing adoption of machine learning models in real-world applications, machine learning fairness has become a critical topic. Several recent studies have shown that machine learning is susceptible to bias issues such as discrimination in computational advertising, automated recruiting, image recognition, speech recognition, etc. Biased machine learning harms, discriminates against, and negatively stereotypes underrepresented groups. Explaining ML models and understanding the reasoning behind their predictions is often difficult, but it is critical for the responsible use of ML.
In this session, Kgomotso will introduce Amazon SageMaker Clarify and present a short demonstration. Amazon SageMaker Clarify helps improve your machine learning models by detecting potential bias and helping explain how these models make predictions. The fairness and explainability functionality provided by SageMaker Clarify is a step towards enabling AWS customers to build trustworthy and understandable machine learning models.
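To give a sense of what such bias metrics measure, here is a plain-Python sketch of two pre-training metrics Clarify reports, class imbalance (CI) and difference in positive proportions in labels (DPL), computed on an invented toy loan-approval dataset (this illustrates the metrics only, not the SageMaker Clarify API):

```python
# Toy loan-approval dataset: (group, approved) pairs for two demographic groups.
data = [("A", 1)] * 60 + [("A", 0)] * 40 + [("B", 1)] * 30 + [("B", 0)] * 70

def positive_rate(rows, group):
    labels = [label for grp, label in rows if grp == group]
    return sum(labels) / len(labels)

# Class imbalance (CI): how unevenly the two groups are represented.
n_a = sum(1 for grp, _ in data if grp == "A")
n_b = len(data) - n_a
ci = (n_a - n_b) / (n_a + n_b)

# Difference in positive proportions in labels (DPL): the approval-rate gap.
dpl = positive_rate(data, "A") - positive_rate(data, "B")

print(round(ci, 2), round(dpl, 2))  # 0.0 0.3
```

Here the groups are equally represented (CI = 0), yet group A is approved 30 percentage points more often (DPL = 0.3) — the kind of imbalance a tool like Clarify flags before a model is ever trained on the data.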
No prerequisites required. Just your presence and attention.
A pragmatic display of Human-AI Symbiosis | Danie Smit – BMW ZA Hub | Alex Botskor – BMW Group | Karel Kruger – Stellenbosch University
Bio: Danie Smit
Danie Smit is the general manager of the Analytics, AI and Platforms area in the BMW ZA Hub. He leads multiple departments delivering international products based on the latest cutting-edge technologies, such as AI platforms, Conversational AI and the Cloud Data Hub. Danie is pursuing his PhD at the University of Pretoria on the organisational adoption of AI. Known by his team as a passionate, fresh and innovative thinker, Danie loves working from the southern tip of Africa in an agile global IT organisation, learning and gaining experience from his regular interactions with people from over 20 different countries.
Bio: Alex Botskor
Alex Botskor is product manager for the digital vehicle file, BMW Group's digital twin for vehicle-related data, and is based at BMW's headquarters in Munich. He is responsible for shaping the digital vehicle file to create as much value as possible for its stakeholders, e.g. in the areas of Circular Economy, Features on Demand and Traceability. Alex joined BMW 8 years ago and gained experience in leading interdisciplinary teams in prototype and series production. Alex loves to think outside the box and is always looking to push the limits and challenge the status quo with his team of fresh thinkers and innovators.
Bio: Karel Kruger
Dr Karel Kruger is a Senior Lecturer in the Department of Mechanical and Mechatronic Engineering at Stellenbosch University. He is a mechatronic engineer and developed a novel software platform for enabling intelligent control of manufacturing systems in his PhD. He serves as co-leader of the Mechatronics, Automation and Design Research Group, where he has led research into digital twins for complex systems and human-system integration within the context of the fourth industrial revolution. In recent years, the research group has explored the application of, and support for, artificial intelligence in this context to unlock new opportunities.
Organisations need to be able to adopt AI not only successfully but also responsibly. This is not trivial: while AI can deliver real value to adopters, it can also have severe impacts on humans. AI's technical capabilities make it powerful; still, implementing AI in organisations is not limited to the technical elements and requires a more holistic approach.
Artificial intelligence systems implemented within organisations are sociotechnical systems, shaped by the interplay between social and technical components. Considering the sociotechnical nature of AI in organisations, the following research question arises: from a sociotechnical perspective, how can an organisation increase adoption of AI as part of its quest to become more data-driven? This tutorial will demonstrate in practice how a sociotechnical artificial intelligence adoption framework (AIAF) was created, following a design science research approach. Furthermore, we will present four platforms that enable the organisation's AI solutions: the data platform, the digital twin, the AI platform and the platform academy. A deep dive into Digital Twin, Circular Economy and AI will provide attendees with real-life examples of implementations in industry.
- A theoretical approach to AI adoption
- A pragmatic approach to AI adoption: Digital Twin, Circular Economy and AI