Enhanced Learning Experiences in the Arts and the Mandate of Learning Technology

Explore possible future scenarios in the field of Technology Enhanced Learning (in the arts).

The narrative zooms in on two specific technological domains (AI and neurotechnology) and sketches ideas on how technological advances in the upcoming decades might impact learning in higher arts education. These scenarios, in turn, expose future challenges related to ethics and to our relationship with, and trust in, technology.

The video essay takes you to a future scenario in 2045 in Brussels where technology-enhanced learning has evolved a lot…

 

SHORT MOVIE

NARRATIVE

Enhanced Learning Experiences in the Arts and the Mandate of Learning Technology

A 19th-Century Vision of the Year 2000 - https://publicdomainreview.org/collection/a-19th-century-vision-of-the-year-2000

Written by
Kobe Ardui
Ingwio D’Hespeel
Koenraad Hinnekint

LUCA School of Arts,
2022.

Intro

Today, the term ‘learning technology’ automatically triggers associations such as online classes, MOOCs, Moodle, Miro boards, learning platforms, breakout rooms, wikis, Facebook groups, flipped classrooms, Skillshare, Udemy, edX, VR, AR, AI-powered learning apps, virtual classrooms, online labs, hybrid learning, …

These are all tools designed to facilitate, enhance, streamline, deliver or optimise learning experiences. In this classroom, we explore a possible future in which technology has advanced in such a way that it is able to actually take control of the whole learning process.

The class zooms in on two specific domains (AI and neurotechnology) and sketches a few ideas on how technological advances in the upcoming decades might impact learning in higher arts education. These scenarios, in turn, expose future challenges related to ethics and to our relationship with – and trust in – technology.

Step 1:
Definition

Let’s kick off by trying to get a better understanding of the term ‘Technology Enhanced Learning’ (TEL). IGI Global has quite a complete definition, describing eight perspectives on TEL (IGI Global, n.d.):

  1. The adoption of technology in order to promote a better classroom experience for students or increase e-learning pedagogical activities.

  2. The application of information and communication technologies to teaching and learning for the purpose of motivating and engaging the learner.

  3. A learning process (or environment) that is supported by the use of educational, virtual and online communication technologies.

  4. An approach to the provision of distance, blended, and classroom-based learning experiences through the use of a full range of information and communications technologies undertaken by communities of educational researchers, designers, information and communications technologists, and media specialists.

  5. Is a teaching and learning strategy where technology facilitates the efficiency of learning for individuals and groups, providing the transfer and sharing of knowledge in organisations, and understanding of the learning process by exploring connections among human learning, cognition, collective intelligence and technologies, and even artefacts.

  6. States the support of an education and training learning activity or system that is enhanced by technology.

  7. A learning process that is supported by technology.

  8. The support of any learning activity through technology.

Technology Enhanced Learning is a central notion in the business of learning innovation, and the theoretical frameworks that shape this notion come from a vast range of different research and educational contexts. This was additionally fueled by a global pandemic that forced every educator to reflect even more than before on the merits and challenges of online learning, resulting in an enormous increase in online educational material, research on online learning, and shared personal experiences.

In the definitions mentioned above we can discern some patterns:

  1. A focus on the learning process, the engagement of the learner or the enhancement of the classroom experience. This means that while technological innovation can lead to very exciting new possibilities, it always comes back to what it might mean for the learning process of the individual or the class.

  2. Taking learning and teaching together. The focus of TEL is not on the teaching side of the spectrum, but rather on enhancing the learning experience and the efficiency of these learning experiences. This is important when we zoom in on AI technology and ask what the role of the human teacher will be in the future. The relationship between teaching and technology presents itself as an interesting topic with a certain tension: one definition speaks of a ‘learning process that is supported by technology’, while another speaks of ‘the application of information and communication technologies to teaching and learning’. Will technology take over teaching in the future?

  3. Technology in these definitions is not restricted. Certain definitions talk about ‘information and communication technology’, but this is still a very broad concept, covering the vast TEL landscape from simple interactive apps, through artificial intelligence, to brain-hacking devices.

Ok, now that we have an understanding of what TEL is or can be, let’s travel a couple of decades into the future and make an educated guess about how technological advances might shape learning in the context of an arts school…

Step 2:
Tipping point

“Part of the singularity is that the technology influences the ability of people to accelerate the technology.”

CJ Carr

CJ Carr is an AI specialist, hacker, co-founder of Dadabots and metalhead. He is based in Boston and Sacramento, US. He was interviewed together with his colleague Zack Zukowski by the FAST45 Knowledge Alliance on the 5th of May 2021.

Read the full interview with CJ Carr and Zack Zukowski here.

 

 


Dadabots CJ Carr and Zack Zukowski working on a new death metal album - © Dadabots

Moore’s Law – the observation that the number of transistors on a chip doubles roughly every two years – is often used as shorthand for the exponential growth of technological ability. The technological singularity referred to in the quote above is “a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilization” (Wikipedia, n.d.).
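
To get a feel for what this kind of exponential growth means over a couple of decades, here is a minimal Python sketch; the fixed two-year doubling period and the 2022 baseline are illustrative assumptions, not figures taken from the interview.

```python
# Minimal sketch of exponential growth under a Moore's-Law-style assumption:
# capability doubles every two years (an illustrative assumption, not a prediction).

def capability(year, base_year=2022, doubling_period_years=2.0):
    """Capability relative to the base year, assuming steady doubling."""
    return 2 ** ((year - base_year) / doubling_period_years)

for year in (2025, 2035, 2045):
    print(f"{year}: ~{capability(year):,.0f}x the 2022 baseline")
# By 2045 this works out to roughly 2,900x - small, regular doublings compound fast.
```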

CJ’s quote above triggers a lot of questions; the ones we start exploring in this classroom are just the tip of a very interesting, very broad iceberg.

  • Which tech domains might highly impact the way we learn in the future?

  • What does this exponential growth mean for technology enhanced learning?

  • And can we expect a tipping point in which technology not only supports our learning, but actively learns with and/or for us?

  • What exciting opportunities might this bring?

  • To what extent can and/or should we trust technology in assisting our learning process?

The progeny of AI & Neurotech’s marriage

In the same interview, CJ dreams up a future in which the combination of advanced neurotechnology and artificial intelligence opens doors to whole new worlds of creation, exploration and experimentation:

“I’m hoping by then [2045] that we’ll have really advanced brain computer interfaces where we can download the whole sensory and mental experience of what it means to be human. We’ll be able to understand how it can be digitized and accessed, and that means input and output. Let’s assume that this exists, and that you can also do this with dreams. We’ll have artists developing dreams, composing experiences. And we would also have AI models that will generate experiences, just like they’re doing media synthesis right now with music and images. And we’ll have psychonauts and oneironauts that are exploring the latent space of these models to find out ‘what else is there in the human experience?’. So that’s kind of interesting. And I also hope that the human brain becomes hackable enough that we could find ways to rewire it. That might make people have superhuman abilities like higher levels of cognition, higher levels of compassion, high levels of conscious awareness. That would be cool.”

CJ Carr


What is the “latent space”?

The latent space is a key concept in Machine Learning and AI. This article does a good job of explaining this imagination-sparking term in language understandable for humans.

Source: https://towardsdatascience.com/understanding-latent-space-in-machine-learning-de5a7c687d8d
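
As a rough intuition for what ‘exploring the latent space’ looks like in practice, here is a minimal Python sketch of latent-space interpolation; the eight-dimensional latent codes and the untrained linear ‘decoder’ are purely illustrative stand-ins for a real trained generative model.

```python
# Minimal sketch of "exploring the latent space": every point in a compressed
# vector space decodes to an output, so walking between two known points produces
# a smooth morph. The decoder here is an untrained linear map - a stand-in used
# only to show the mechanics, not a real generative model.
import numpy as np

rng = np.random.default_rng(0)
latent_dim, output_dim = 8, 64            # e.g. an 8-D latent code -> an 8x8 "image"
decoder = rng.normal(size=(latent_dim, output_dim))

z_a = rng.normal(size=latent_dim)         # hypothetical latent code of "face A"
z_b = rng.normal(size=latent_dim)         # hypothetical latent code of "face B"

for t in np.linspace(0.0, 1.0, 5):
    z = (1 - t) * z_a + t * z_b           # a point on the line between the two codes
    image = z @ decoder                   # decode the in-between point
    print(f"t={t:.2f}  first pixel values: {np.round(image[:3], 2)}")
```

In a real model (a GAN or a variational autoencoder), the decoder is learned from data, so the in-between points decode to plausible in-between outputs rather than noise.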


You might want to read that quote again! Let the words and ideas tickle your brain for a while…

 

(tickle tickle tickle)

 

Ok, let’s dive into some of the ideas and concepts that CJ brings to the table.

 

“…download the whole sensory and mental experience of what it means to be human…”

CJ Carr

This assumption implies that in 2045 neurotechnology will be precise and fast enough to measure all of the brain’s signals. We now know that the body sends about 11 million signals per second to our brain. These are just the signals coming in from our eyes, ears, nose, tongue and skin; the number does not include signals generated within the brain itself. We know our brain is made up of about 100 billion cells, each of them interconnected with thousands of other brain cells. It is estimated that the brain might be able to execute up to about 100 billion operations per second.

“…and that means input and output…”

CJ Carr

We’re not talking about just ‘reading’ the brain, but ‘writing’ as well. This means that the technology should also be precise and fast enough to send specific high-speed signals to all of these brain cells. This concept is highly complicated and should be explored further with neuroscientists as well as with experts on the learning process and the personal aspects of learning.

This needs some nuancing: we might not need access to all brain cells and signals. For example, it seems that the conscious mind can process only about 50 bits of the 11 million signals it receives every second. The vast majority of the information that the brain receives is processed un- (or sub-?)consciously. It is probably even a very good idea to entrust some of those unconscious processes – like your breathing and your heartbeat – to the experienced guidance of your brain.
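
A quick back-of-the-envelope calculation with the figures above shows how extreme this filtering is; note that bits and signals are not strictly the same unit, so this Python sketch is only an order-of-magnitude illustration.

```python
# Back-of-the-envelope comparison of the figures mentioned above. "Bits" and
# "signals" are not the same unit, so this only illustrates the scale of the
# filtering that happens before anything reaches conscious awareness.
incoming_signals_per_second = 11_000_000   # sensory signals reaching the brain
conscious_bits_per_second = 50             # rough estimate of conscious throughput

share = conscious_bits_per_second / incoming_signals_per_second
print(f"Consciously processed share: roughly {share:.5%}")
# -> roughly 0.00045% - virtually everything is handled outside conscious awareness.
```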

These are just some first considerations. Do you feel it too, how you are getting sucked into the (admittedly super fascinating) ‘what is consciousness?’ rabbit hole? Let’s not go in there right now. Instead, let’s focus on:

“…We’ll have artists developing dreams, composing experiences.…”

CJ Carr

Oh wow! We would be able to design experiences that feel totally real! So instead of experiencing a VR ride on a scary rollercoaster, the experience would feel so real that it is indistinguishable from the real thing: the wind in your hair, the G-forces working on your body, your inner ear doing funny stuff with your equilibrium…

Learning is also an experience! Could this mean that we will be able to design artificial experiences in which real learning happens? Real learning in the sense that it has a lasting impact on the learner’s knowledge and skill set? Research on lucid dreaming actually seems to back this hypothesis: at least three scientific studies have described how practicing motor skills in a lucid dream (like tossing a coin into a cup or throwing darts) actually enhances real-world performance of these tasks. And isn’t a dream some sort of artificial experience?

Ok, now let’s add some artificial intelligence into the mix!

“…And we would also have AI models that will generate experiences, just like they’re doing media synthesis right now with music and images…”

CJ Carr

You’ve probably seen them before: photographic portraits generated by AI that are indistinguishable from real people. And you might have seen some otherworldly and trippy videos generated by AI, showing morphing, uncannily semi-realistic landscapes.

“Generated Photos” made their business from AI-generated portraits.

AI Landscapes by Alexander Rebe

From realistic to dream-like, what these images all have in common is that the AI that generated them was trained on tons of visual material. By training on ‘data sets’ (thousands of photographs of faces, just to give one example), the AI ‘learns’ what characteristics make up a human face. Having learned this, it can now create new faces from scratch. Needless to say, the quality of this output depends on both the quality and the amount of data the AI was trained on.
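
To make this ‘learn the characteristics, then generate from scratch’ loop concrete, here is a minimal Python sketch; the four made-up face measurements and the simple Gaussian model are illustrative stand-ins for the huge photo datasets and deep neural networks used in practice.

```python
# Minimal sketch of the "learn characteristics from examples, then generate new
# ones from scratch" loop. Real face generators use deep networks trained on huge
# photo datasets; here a plain Gaussian fit over a tiny made-up dataset stands in
# for the learning step so the principle stays visible.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "dataset": 1,000 faces described by four made-up measurements
# (eye distance, nose length, face width, jaw angle) - purely illustrative.
true_mean = np.array([6.3, 5.1, 14.0, 120.0])
true_cov = np.diag([0.2, 0.3, 1.0, 25.0])
dataset = rng.multivariate_normal(true_mean, true_cov, size=1000)

# "Training": estimate what typical faces in the dataset look like.
learned_mean = dataset.mean(axis=0)
learned_cov = np.cov(dataset, rowvar=False)

# "Generation": draw new faces that were never in the dataset.
new_faces = rng.multivariate_normal(learned_mean, learned_cov, size=3)
print(np.round(new_faces, 1))
```

The point about data quality made above shows up directly here: with fewer or noisier example faces, the learned statistics drift, and the generated faces drift with them.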

But AI isn’t limited to generating images only… AI can create music, novels, social media content, websites, ads, game scenarios, sculptures, building layouts, etc. In fact, anything that can be translated into digital data sets can become training material for AI. So, when CJ suggests that by 2045 we might be able to “…download the whole sensory and mental experience of what it means to be human…”, this implies that ‘human experience’ becomes a trainable asset for AI! Which in turn implies that AI will be able to generate new and unknown human experiences.

Do you feel that too, how this idea is tickling your brain? What kind of interesting learning experiences could it generate? I might learn ideas or skills that no one has ever learned before! What if ‘glitches’ appear in these generated (learning) experiences – what would that be like? And would I even trust learning in this way? Sometimes stuff you’ve learned can be extremely hard, or even impossible, to ‘unlearn’, you know…

Step 3:
Trust in educational technology

Image: “The Author Depositing his Voice at the Patent-Office, to Prevent Counterfeiting” from Octave Uzanne’s “The End of Books” (1894) Source: https://publicdomainreview.org/collection/octave-uzannes-the-end-of-books-1894

“To say that a tool would be neutral is a lie.”

Emmanuel Verges

Emmanuel Verges is a cultural engineer, co-director of the Observatory of Cultural Policies and director of the Office. Verges was interviewed by the FAST45 Knowledge Alliance on the 28th of April 2021.

Read the full interview with Emmanuel Verges here.

 

 


Let’s assume a not-so-distant future in which the technology described in the section above has become reality and early adopters (both learners and arts schools) have started integrating it in learning processes.

In this future scenario, let’s now explore the idea of ‘trust’ in relation to these new learning technologies… There will be ‘users’ who might have too much trust, as we experience today, for example, in the use of social media versus protecting one’s individual privacy, or versus the acceptance of fake news. And we’ll have the opposite as well: people and institutions radically distrusting innovations, even against scientific findings. Remember how in the early days of Wikipedia, ‘classic encyclopaedias’ were trusted more than crowd-sourced information. Or how today even highly educated medics fail to accept that deep learning algorithms are as precise in detecting, for example, cancer in X-rays as experienced specialists are (Liu et al., 2019).

 

Click here to check out a summary of (often comparative) studies.

So, what is ‘the right amount of trust’? Or let’s ask ourselves a more nuanced question: which parts of the learning and teaching process can we entrust to these kinds of technologies, and which are the mandate of humans only?

“And the classic storytelling is never going to go away. Whether it’s a computer generating the story or the human steering the computer, art schools are still going to have peer criticism. Even with this new digital medium, a lot of those traditional things you learned in art school will still be relevant. Just the tools and technology might be different. Instead of a pencil or a paint brush, it just might be a graphics processor.”

Zack Zukowski

Zack Zukowski is an AI specialist, hacker, co-founder of Dadabots and metalhead based in Boston and Sacramento, US. He was interviewed together with his colleague CJ Carr by the FAST45 Knowledge Alliance on the 5th of May 2021.

Read the full interview with Dadabots here.

 

 


“…It’s about conceptualization and understanding context, which are much more difficult for AI. The technical part of it, making a building or drawing or painting or whatever – that’s easy. That’s the mechanical work. Conceptualization and putting it into the context where the audience will actually understand – that’s the domain which is more dependent on humans.”

Marten Kaevats

Marten Kaevats is an architect, urban planner, and community activist; at the time of the interview he worked as Estonia’s National Digital Advisor. The FAST45 Knowledge Alliance interviewed Marten Kaevats on the 7th of May 2021.


“If bias is in the data, it will be in the system. There are all these negative examples. You can put a generative, objective AI out in the field and people will then use it to train chatbots talking only nonsense or creepy stuff. This is a problem related to the values of our society because AI has no implemented value. It will do what it has been trained to do. There are activities in machine ethics and related fields. This is still under development. It’s unclear where we will end up. Some people say, “Yes, we need it. We need this part in the autonomous car, which decides, ‘Oh, there might be an accident. Whom should I kill?'” And others argue, “No, that cannot be the case. In case of an accident, for humans, it’s similar. You have one second to react. Nobody knows exactly what’s going on.” So this, for example, is a lot under discussion and related to the development of machine ethics, the avoidance of bias, et cetera, which means that we have to work more with disciplines in human science, legal staff, and not only on the core essence of the mathematics of the algorithms. This might be adept in-depth, but first of all, we have to do this interdisciplinary work to think about how our AI affects certain areas of society and life.”

Stefan Baumann

Stefan Baumann, German Research Center for Artificial Intelligence, Smart Data & Knowledge Services. Stefan Baumann was interviewed by the FAST45 Knowledge Alliance on the 15th of April 2021.


Before we move to the very end, here’s one last question we would like to pour into your brain for you to ponder: how can we educate people in order to empower them with enough technological literacy to find answers to these kinds of questions themselves? Tim’s vision of his mission as an educator serves as a fitting finale to help digest this food for thought…

“I would say that my life in 25 years would be best if I could be able to develop a form of teaching that fuses the digital and the physical, and that has answers to what kind of teaching is most effective. And let’s look at the term ‘effective’ not just in terms of the outcome, but also in terms of the personal and psychological development of students. That would be a wonderful goal to have: to develop a form of teaching that empowers young creatives to work curiously with technologies, to be able to question those technologies in an appropriate way, and also to create artifacts and artworks that have an impact in society. Works that show the potential dangers and benefits of new technologies…”

Tim Rodenbröker

Tim Rodenbröker is a creative coder and lecturer in several institutions, based in Germany. The FAST45 Knowledge Alliance interviewed Tim Rodenbröker on the 5th of May 2021.

Read the full interview with Tim Rodenbröker here.

 

 


Explore futures of the arts and technology in 2045

Get to know the FAST45 Footnotes Summer School's work on DIS/CONTINUITIES - Digital Cultures of Education!


What will the art schools of the future look like?

Check out the FAST45 art school futures scenarios!


References

Dive into the research
