Stories capture our projects, thoughts, reflections, and highlights. Read about our latest work.
Data & AI for Social Impact
We identify and research trends and put data and AI algorithms to the test — all with one question in mind: does this stuff actually work? We explore by doing, working with both brand-new and more established innovations.
The digital age creates an ever-growing need for people to adjust to the rapidly changing world around them. This comes with many challenges. At the Centre for Innovation we develop solutions in a responsible, transparent and above all meaningful way. We believe that technology can be built in a way that ensures privacy and improves equality and fairness. In a time of unprecedented access to data, we try to put people first and bring the overwhelming aspects of technology back to a human scale. In doing so, we want to help people better understand the time and place we live in.
Explore ChatGPT’s Potential for Higher Ed: Try Our Hands-On Exercise
AI chatbots are the hot topic of the moment. Their impressive ability to generate human-like text has sparked many discussions focused on the risk that students might use these new tools to cheat on assignments. At LLInC we acknowledge the challenges — and we’ve shared ways to adjust assignments accordingly. At the same time, we see fantastic potential to use AI chatbots as support tools. With the right prompts, teachers, students, researchers and university staff can use them to work more efficiently and discover new perspectives on topics and tasks. To help you learn more, we’re sharing a hands-on exercise that will help you quickly spot opportunities and limitations. The exercise was originally written for ChatGPT but can also help you test out other AI chatbots.
Data Responsibility: The Difficulties of Achieving Transparency
It is no secret that in the realm of data responsibility, there is much debate about transparency and how to achieve it. What does true transparency look like? How do you communicate about data to ensure data subjects understand what is happening? And in the end, how does this relate to the legal obligations around informed consent, transparency and the ability of users to protect their privacy?