
  • 08:00

    Registration & Light Breakfast

  • 09:00

    Welcome Note

  • Improving Deep Learning

  • 09:15

    From Open-Endedness to AI

  • 09:40
    Huma Lodhi

    Deep Learning Techniques to Improve the Football Viewing Experience

    Huma Lodhi - Lead Machine Learning Engineer - Sky


    Computer Vision Meets Deep Learning: A Sports Fan Perspective

    Sports have played an important role in the development of Artificial Intelligence and Machine Learning. Chess has been an attractive application domain since the early days of AI because of the intelligence and reasoning required to play it. Recently there has been growing interest in applying AI, and more specifically deep learning techniques, to tasks ranging from detecting interesting events to predicting results, with the aim of enhancing viewers’ experience and increasing their engagement. This talk will give an overview of novel methodologies based on deep learning and computer vision for sports from a viewer’s perspective.

    Huma Lodhi is the Lead Machine Learning Engineer at Sky. She has over 15 years of experience in Artificial Intelligence & Machine Learning across both industry and academia. She is an accomplished expert with hands-on experience in the development and application of Deep Learning, Kernel Methods, Relational Learning and Ensemble Methods in areas ranging from insurance to healthcare. She holds a PhD in Machine Learning from the University of London. She is a co-editor of two books and has published many research articles in leading AI & Machine Learning journals and conferences.

  • 10:05

    ANML - Learning to Continually Learn

  • 10:30

    Coffee & Networking Break

  • Tools For Deep Learning

  • 11:00
    Ira Ktena

    Graph Representation Learning in Healthcare and Beyond

    Ira Ktena - Senior Researcher - DeepMind


    Graph Representation Learning in Health Applications and Fairness Considerations

    Recent work on neuroimaging has demonstrated significant benefits of using population graphs to capture non-imaging information in the prediction of neurodegenerative and neurodevelopmental disorders. This has been enabled by advances in the field of graph representation learning. The non-imaging attributes may contain demographic information about the individuals, but also the acquisition site, as imaging protocols and hardware might significantly differ across sites in large-scale studies. This talk will give an overview of the advances that graph representation learning has contributed to the fields of neuroimaging and connectomics in recent years. It will also discuss fairness considerations that arise when these models leverage sensitive attributes.

    Ira is a Senior Researcher at DeepMind, working on Machine Learning research for the Life Sciences with Danielle Belgrave and the Deep Learning team. Previously, she was a Senior Machine Learning Researcher on the Cortex Applied Research team at Twitter UK, focusing on real-time personalisation while carrying out research at the intersection of recommender systems and algorithmic transparency. Her work on the algorithmic amplification of political content on Twitter was featured by The Economist and the BBC, among others.

  • 11:25
    Michael Bronstein

    Physics-Inspired Models for Deep Learning on Graphs

    Michael Bronstein - Head of Graph Learning Research / DeepMind Professor of AI - Twitter / Oxford University


    Physics-Inspired Models for Deep Learning on Graphs

    The message-passing paradigm has been the “battle horse” of deep learning on graphs for several years, making graph neural networks a big success in a wide range of applications, from particle physics to protein design. From a theoretical viewpoint, it established the link to the Weisfeiler-Lehman hierarchy, making it possible to analyse the expressive power of GNNs. I argue that the “node and edge-centric” mindset of current graph deep learning schemes imposes insurmountable limitations that obstruct future progress in the field. As an alternative, I propose physics-inspired “continuous” learning models that open up a new trove of tools from the fields of differential geometry, algebraic topology, and differential equations, so far largely unexplored in graph ML.

    Michael Bronstein is the DeepMind Professor of AI at the University of Oxford and Head of Graph Learning Research at Twitter. His research interests are primarily in geometric deep learning and graph ML. His work in these fields appeared in the international press and was recognised by multiple awards. Michael is a Member of the Academia Europaea, Fellow of IEEE, IAPR, BCS, and ELLIS, ACM Distinguished Speaker, and World Economic Forum Young Scientist. In addition to his academic career, he is a serial entrepreneur and founder of multiple startup companies, including Novafora, Invision (acquired by Intel in 2012), Videocites, and Fabula AI (acquired by Twitter in 2019).

  • 11:50

    Machine Learning Systems Design

  • 12:15
    Edward Johns

    Humans Teaching Robots: The Future of Deep Learning in the Physical World

    Edward Johns - Director of the Robot Learning Lab / Head of Robot Learning - Imperial College London / Dyson


    Humans Teaching Robots: The Future of Deep Learning in the Physical World

    Deep learning has proven to be astonishingly powerful for software-based AI. But what about the physical world? Could we use deep learning to train robots? In this talk, I will present my vision for the future of everyday robots learning everyday tasks, through interactions with everyday humans. In particular, I will describe the progress being made in the field of imitation learning – robots learning from human demonstrations – in the Robot Learning Lab at Imperial College London. I believe that robotics will perhaps change our lives more dramatically than any other area of AI, and that deep learning will be a key part of that future.

    Dr Edward Johns is the Director of the Robot Learning Lab at Imperial College London. After receiving a BA and MEng from Cambridge University, and a PhD from Imperial College, he was a founding member of the Dyson Robotics Lab at Imperial College in 2016. In 2017, he was awarded a prestigious Royal Academy of Engineering Research Fellowship, and then in 2018 he founded the Robot Learning Lab. In a part-time capacity, he is also Head of Robot Learning at Dyson, and an advisor for a number of start-ups. Edward's research lies at the intersection of robotics, computer vision, and machine learning.

  • 12:40

    Lunch

  • Reinforcement Learning

  • 13:40

    The Power of Large-Scale RL and Generative Models

  • Computer Vision

  • 14:05
    Xin Wang

    Machine Vision at Shell

    Xin Wang - Machine Vision Manager - Shell


    Machine Vision at Shell

    Shell is moving rapidly on digital transformation, in which AI plays a key role. At Shell we are developing and delivering many computer vision projects and products. In this presentation, we will demo the latest computer vision developments at Shell and highlight selected projects. All of these developments are bringing value to Shell.

    Xin Wang graduated from Delft University of Technology in 2015 with a PhD thesis titled “Active Vision for Humanoid Robots”. Afterwards, she joined Shell and established the Machine Vision team. She is now the Machine Vision Manager and leads a team delivering Machine Vision products to the business. She has a great passion for AI, not limited to Machine Vision but also spanning Machine Learning and Natural Language Processing. In her spare time, she is active in teaching robotics to kids.

  • Siva Chamarti

    Machine Vision at Shell

    Siva Chamarti - Head of Machine Learning - Shell



    Siva Chamarti is an inspiring leader with in-depth knowledge of developing and deploying AI on the edge, with a special interest in computer vision. At Shell, he heads the AI/ML Engineering team, which has specialist skills in scaling up AI products. He helped build the Shell.AI platform, which accelerates AI application development and the scaling-up of projects.

  • 14:30

    Unsupervised Learning in Computer Vision

  • 14:55

    Coffee & Networking Break

  • Natural Language Processing

  • 15:30

    Natural Language Processing for Deep Learning

  • 15:55

    On-device Neural Networks for Natural Language Processing

  • 16:20

    PANEL: Addressing the Implications of Autonomous Systems

  • 17:00

    Networking Reception

  • 18:00

    End of Day 1


  • 08:00

    Doors Open & Light Breakfast

  • 09:00

    Welcome Note

  • Generative Models

  • 09:10

    Do Deep Generative Models Know What They Don't Know?

  • 09:35

    Enabling World Models via Unsupervised Representation Learning of Environments

  • Responsible AI

  • 10:00
    Detlef Nauck

    Implementing a Company-wide Framework for Responsible AI

    Detlef Nauck - Head of AI & Data Science Research - BT


    How to Play Fair

    Any AI model that we build from data will amplify bias that is hidden in the training data or introduced later in the model building and deployment process. Building fair AI models is not a matter of measuring one of the many fairness metrics. Fairness is a complex issue and cannot be left to an individual data scientist. Every organisation running AI needs to develop fairness monitoring that ranges from data quality management to detecting, and taking out of operation, models that have gone rogue. By now we should no longer be seeing biased AI, yet new stories about rogue AI emerge all the time. With AI regulation expected to become law in the near future and procedures for AI audits being developed now, the time for the AI industry to demonstrate robust fairness assurance is now.

    Detlef Nauck is the Head of AI & Data Science Research for BT’s Applied Research Division, located at Adastral Park, Ipswich, UK. Detlef has over 30 years of experience in AI and Machine Learning and leads a programme spanning the work of a large team of international researchers who develop capabilities underpinning modern AI systems.

    A key part of Detlef’s work is to establish best practices in Data Science and Machine Learning for conducting data analytics professionally and responsibly. Detlef has a keen interest in AI Ethics and Explainable AI to tackle bias and to increase transparency and accountability in AI.

    Detlef is a computer scientist by training and holds a PhD and a Postdoctoral Degree (Habilitation) in Machine Learning and Data Analytics. He is a Visiting Professor at Bournemouth University and has published 3 books, over 120 papers, and holds over 20 AI patents.

  • 10:25

    Coffee & Networking Break

  • 10:55
    Toju Duke

    Responsible AI at Google

    Toju Duke - Program Manager - Responsible AI - Google


    Responsible AI at Google

    AI is a fundamental, groundbreaking technology with an adoption rate of 64% year over year. Along with its innovative and transformational abilities come challenges regarding ethics and responsibility. If AI/ML systems are developed without responsible and ethical frameworks, they have the propensity to harm individuals in society. It is the responsibility of every organisation developing AI models to adhere to a Responsible AI framework that is accountable, fair, transparent and safe. In this talk, you will learn how Google approaches Responsible AI, best practices for AI frameworks, and relevant case studies.

    Toju is a Responsible AI Program Manager at Google, with over 15 years’ experience spanning Advertising, Retail, Not-for-Profits and Tech. She designs Responsible AI programs focused on the development and implementation of Responsible AI frameworks across Google’s product areas, with a focus on Foundation Models, Natural Language Processing, and Generative Language Models. With a proven track record in business success and project management, she is a Manager for Women in AI Ireland, a tech start-up mentor, and a business advisor. Toju is a public speaker who advocates for transparent, bias-free AI aimed at reducing systemic injustices and furthering equality. She is also the founder of VIBE, a women’s community focused on personal and professional development using the underlying principles of emotional intelligence.

  • 11:20
    Emmanuel Ferreyra Olivares

    Reliable AI Models: How to Deal With the Unknown?

    Emmanuel Ferreyra Olivares - Principal Researcher - Fujitsu Research of Europe


    Reliable AI Models: How to Deal With the Unknown?

    Improving the reliability and robustness of modern AI models has received close attention lately due to its importance for critical applications. A significant challenge is the lack of signals from unknown data. As a result, models can produce overconfident predictions on unseen inputs, leading to erroneous outcomes. This impacts model reliability, with compromises to security, revenue and competitiveness, and eventually trust. Out-of-Distribution (OOD) research has focused on mitigating this problem with detection techniques covering different fronts of the issue. This talk introduces the OOD concept and its implications for model security. It then highlights relevant use cases in which the impact of OOD can be observed. Finally, the presentation concludes by pointing out the latest advances and the shortcomings still to be addressed in future research.

    Emmanuel Ferreyra Olivares, PhD, is an AI & Data Security researcher with Fujitsu Research of Europe. In this role, Emmanuel is involved in designing and developing reliable and secure data-driven AI solutions relevant for highly regulated industries by practising close collaboration with global partners both in the industry and academia. Emmanuel’s research interests are in the broad areas of Cyber Security, Explainable AI, Computational Intelligence and Smart Simulation.

  • Applied Deep Learning

  • 11:45
    Stephen O'Farrell

    BuzzWords - How Bumble does Multilingual Topic Modelling at Scale

    Stephen O'Farrell - Machine Learning Scientist - Bumble


    BuzzWords - How Bumble does Multilingual Topic Modelling at Scale

    With the abundance of free-form text data available nowadays, topic modelling has become a fundamental tool for understanding the key issues being discussed online. We found the state-of-the-art topic modelling libraries either too naive or too slow for the amount of data a company like Bumble deals with, so we developed our own solution. BuzzWords runs entirely on GPU using BERT-based models, meaning it can perform topic modelling on multilingual datasets of millions of data points and gives us significantly faster training times than other prominent topic modelling libraries.

    Stephen O'Farrell is a machine learning scientist at Bumble, where, as a member of the Integrity & Safety team, he works to ensure user safety across all of Bumble's platforms. His work generally deals with NLP and Computer Vision tasks, deploying deep learning models at scale across the organisation. He graduated with an MSc in Data Science and a BSc in Computational Thinking, both from Maynooth University, Ireland.

  • 12:10

    Machine Learning: A New Approach to Drug Discovery

  • 12:25

    Lunch

  • 13:25
    Petros Ypsilantis

    NLP for Regulatory Compliance

    Petros Ypsilantis - AI & Machine Learning Lead - JP Morgan



  • 13:50

    Towards Self-supervised Curious Robots

  • 14:05
    Hastagiri Vanchinathan

    Billion Scale Recommendations at Sharechat and Moj

    Hastagiri Vanchinathan - Senior Director of AI - ShareChat


    Billion Scale Recommendations at Sharechat and Moj

    Content marketplaces (like Sharechat, Moj, Instagram, TikTok) face unique challenges in recommending content to their users. In addition to traditional end-user metrics, these recommender systems also have to care heavily about fairness and equity for content creators. The volume of content uploaded to these platforms exceeds the traditional commerce, music or movie use cases by orders of magnitude. For instance, the number of uploads per hour by Moj's content creators is approximately equal to the total number of content pieces (including individual episodes) ever offered on Netflix. Making matters harder, on short-video platforms such as Moj the average length of a content piece is around 15-20 seconds while the average session lasts more than 30 minutes. This means that traditional recommender systems, which care purely about getting the top 5-10 recommendations absolutely correct, will not work well here, as we need to maintain relevance and interest well into the top 200-300 recommendations. In this talk, I will give a brief overview of the AI journey at Sharechat and Moj, the number 1 Indian content marketplace platform. I will present some of the research challenges we are solving, along with techniques and results that worked for us. I will also talk about some of the key decisions we took along the way that helped us scale up the AI org and its efficiency. The talk will have a mix of technical, research and strategic discussion points from our journey so far.

    Hastagiri is the Senior Director of AI at ShareChat, India's largest AI-powered content ecosystem, driven largely by feed personalisation, automated content understanding, and improvements in camera and creator tools.

  • 14:30

    PANEL: Deep Learning for Good

  • 15:00

    End of Summit