Background

The Japanese class is rated for complete beginners. However, I have some prior awareness of Japanese, including some knowledge of its writing systems, phonetics, syntax and culture.

Lesson 01

The teacher introduced the class to hiragana and explained the phonetics using romaji transliterations and by pronouncing them with the class. This highlighted an issue for reading and writing in languages with non-Latin scripts: to read in the native script, one must first learn a number of unique and often arbitrary symbols and their phonetics or phonetic patterns. A fairly simple software implementation could annotate characters, after a period of time or on user prompt, with a romaji transliteration, or play a sound file of the correct phonetic.
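The annotation idea above could be sketched minimally as a character-to-romaji lookup. The mapping and function names here are my own illustration, not an existing library; a real implementation would need the full syllabary, including digraphs (e.g. きゃ) and context-dependent readings (は as "wa" when used as a particle).

```python
# A minimal sketch of the romaji-annotation idea: gloss each known
# hiragana character inline with its romaji. The mapping covers only
# a handful of characters for illustration.
HIRAGANA_TO_ROMAJI = {
    "に": "ni", "ほ": "ho", "ん": "n", "ご": "go", "こ": "ko",
}

def annotate(text: str) -> str:
    """Append a romaji gloss in parentheses after each known hiragana."""
    return "".join(
        f"{ch}({HIRAGANA_TO_ROMAJI[ch]})" if ch in HIRAGANA_TO_ROMAJI else ch
        for ch in text
    )

print(annotate("にほんご"))  # に(ni)ほ(ho)ん(n)ご(go)
```

In a user interface, the gloss could instead be rendered as ruby text above the character, or replaced by playing the corresponding audio clip.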

Phonetics (obviously) play a large part in speaking a language correctly (especially when reading aloud), which made me consider whether, or to what extent, it is possible to accurately recognise correct phonetic reproduction from a person in an immersive virtual environment. Google offers a speech-recognition API for understanding spoken words, but not necessarily individual mora. This technology would therefore be useful for conversation practice, but not for recognising mora practice.
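One workaround for the word-level limitation might be to compare a recogniser's romaji transcript against the target utterance mora by mora. The sketch below is hypothetical: `split_mora` is a deliberately naive segmenter (a real one must handle digraphs like "kya", geminates, long vowels and the syllabic "n" before other consonants).

```python
# A hedged sketch: given a romaji transcript from some speech
# recogniser, give per-mora feedback against a target phrase.
VOWELS = set("aiueo")

def split_mora(romaji: str) -> list[str]:
    """Naively split romaji into mora: consonant cluster + vowel,
    with any trailing consonant (e.g. syllabic 'n') kept as a mora."""
    mora, current = [], ""
    for ch in romaji:
        current += ch
        if ch in VOWELS:
            mora.append(current)
            current = ""
    if current:
        mora.append(current)
    return mora

def mora_feedback(target: str, heard: str) -> list[str]:
    """Return a per-mora verdict comparing recognised speech to target."""
    t, h = split_mora(target), split_mora(heard)
    verdicts = []
    for i, m in enumerate(t):
        got = h[i] if i < len(h) else "(missing)"
        verdicts.append(f"{m}: {'ok' if got == m else 'heard ' + got}")
    return verdicts

print(mora_feedback("arigatou", "arigato"))
```

Even this crude comparison would let an exercise highlight which mora the learner dropped or mispronounced, rather than simply rejecting the whole word.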

Part of the phonetic teaching involved focus on mouth and tongue movements to correctly produce the sound.

I felt anxious when attempting to produce an unfamiliar phonetic sound, for example any mora beginning with "g" that appears in the middle of a phrase. This is supposed to be produced as a combination of 'g' and 'n', articulated at the back of the throat with a nasal sound. I found this difficult, and even more so when the teacher attempted one-on-one practice with me in class.

The teacher made the class draw hiragana symbols in the air as writing practice. I already had some familiarity with the symbols, so it was difficult to gauge the efficacy of this method. I did, however, feel more confident in assigning meaning to the characters we practiced.

The teacher made us pronounce words as a group. I felt anxious whenever I was unclear on a pronunciation, and the energy of the class also felt low when pronouncing out loud. Despite knowing the words and their accurate pronunciations, I did not feel I could access that knowledge in this situation. Perhaps this is an example of Krashen's Monitor theory: I have learned the phonetics and their corresponding sounds, but am unable to produce them at will as if I had acquired them. It would be interesting to find a methodology in a virtual environment that allowed the learner to make this transition from learned to acquired, through exercises paced according to learner skill (comprehensible input at i + 1). Or, perhaps, self-practice in a virtual setting could lower anxiety through familiarisation. There is, perhaps, an argument for recreating the classroom itself to lower anxiety, as compared with creating an immersive Japanese-space.