Simultaneous translation / interpretation

Some collectives have started doing simultaneous translation through meet.coop. While BBB isn’t yet tailored for that, there have been positive experiences from groups that designed their social protocol to work with meet.coop in specific ways.

A rather positive experience so far was the CommonsConfluence session during the World Social Forum on Transformative Economies at the end of June, held in four languages with one main conference room in English and translated rooms running in parallel. The setup was designed around the listeners, and worked very well because they didn’t have to change rooms.

However, several collectives need to move away from one central language and run sessions where people can intervene in their mother tongue, with the translators translating into the other languages. This poses further challenges that could be addressed through technical development in BBB combined with adjusted social protocols. The BBB developers have shown interest, as have several possible funders who might be willing to contribute.

In this thread we copy over previous discussions and hope to work out the best social protocol, in combination with the technical development needed for the varying use cases.


I was in a BBB meeting last weekend with simultaneous translation across four languages. From the listener’s point of view it was simply done - a main ‘live room’, three parallel rooms with translation into each of the other three languages, and two browser tabs open: one in the live room and one in the relevant translation room, with audio switched off in whichever tab was currently in the ‘wrong’ language. I don’t know how complicated this was from the organisers’ side, but I imagine it wasn’t any harder for the translator to do the equivalent thing than it was for me, except with the source-language audio switched on.


The way it worked in my recent meeting (WSF, the World Social Forum) was simple: it just required the user to mute or unmute two browser tabs. The meeting was conducted in four languages.

Let’s say the meeting is in just two languages. Two BBB rooms are open, in two browser tabs. Room A is the main room, in whatever language each speaker is using. Room B is always, let’s say, Spanish. Room B is silent when a Spanish speaker is in Room A, and carries the translation when another language is spoken in Room A. So a Spanish ‘listener’ can always hang out visually in Room A, see people there, put their hand up in the chat, etc. - and speak when invited. And when a non-Spanish speaker comes on in Room A, they simply mute the Room A tab and unmute Room B, where the translator ‘lives’.

Does that sound like what you need? It’s not really switching rooms, just muting the audio on tabs in your browser. I guess a further translation (Spanish>French?) would work the same way with Room C added - though maybe with an awkward lag in Room B > Room C translation.
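The tab-muting protocol described above can be sketched as a tiny decision rule. This is a purely hypothetical Python model (nothing BBB itself provides - the names and shape are mine), just to make the listener's side of the protocol explicit: given the language currently on the floor in Room A, it says which tab's audio the listener should have on.

```python
# Hypothetical model of the two-tab listening protocol described above.
# Room A is the main room (any language); Room B carries the translated feed.

def tabs_to_unmute(speaker_language: str, listener_language: str = "es") -> dict:
    """Return which browser tab a listener should have unmuted.

    If the current speaker already uses the listener's language, listen to
    Room A directly; otherwise mute Room A and listen to the translator
    in Room B.
    """
    direct = speaker_language == listener_language
    return {
        "room_a": direct,       # main room audio
        "room_b": not direct,   # translated audio
    }

# A Spanish listener while a Spanish speaker has the floor:
print(tabs_to_unmute("es"))  # {'room_a': True, 'room_b': False}
# The same listener when an English speaker takes over:
print(tabs_to_unmute("en"))  # {'room_a': False, 'room_b': True}
```

Adding a Room C for a third language would just mean one more key in the returned dict, following the same rule.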

What I see described in Zoom is a paid service attached to an account. What was done in the WSF meeting simply required a bilingual person sitting in Room B, ready to speak on Room B’s audio and translate whenever the language switched in Room A. As it happens, in the WSF meeting that person was paid by WSF as a service to the meeting. In your case, I guess you might have bilingual folks, members of MayFirst, in the meeting anyway, ready to do that service for compadres? Setting up two simultaneous, independent BBB rooms from Greenlight is absolutely easy.

Is this making sense?


Simultaneous interpretation being possible is a priority for us to use BBB. I think what you did with the WSF meeting is already really good, but as Jamie mentioned, to run interactive meetings going back and forth between different languages, we need the interpreters to be able to switch between rooms easily, and participants who want to speak need to be noticed quite quickly when they ask to speak. I wonder if the moderator in the main room can see the raised hands of people in side rooms just by watching the main room.
Our participants are not always very good with technology, and if I ask them to open several rooms/tabs I think I might lose them. But I will be happy to participate in a working group on this, and to try the platform with our interpreters to see if there are things we can improve.
For example, if we could get rid of the echo test and the headphone/microphone prompt that comes up four times when you want to join a call with sound, that would make it easier for interpreters to switch rooms more rapidly.
We are using the interpretation feature in Zoom, and yes, it’s an inconvenience that you can’t hear the voices of the other interpreters (to be able to interpret via relay from another interpreter’s language), but otherwise it’s quite intuitive and easy to use for participants and interpreters alike.
Camille de Wit from Friends of the Earth International


All participants are continuously ‘in’ the main room. And also, if they choose, in a translation room. They can raise their hand in either room at any time. Thus being seen in the main room is just a matter of the facilitator establishing a clear signal - like writing “HAND” or “SPEAK” or “@@@” in the public chat.

Switching which room you hear is just a matter of clicking the audio symbol on the tab in the browser.

Switching which room you see (and its public chat) is just a matter of selecting the tab in the browser. A participant with the camera on can be seen in both rooms, and also appears in the participant list and the public chat of both rooms.

So it’s just two tabs. Maybe a lot depends on how clear the facilitator is in the main room (providing URL instructions in multiple languages and in the public chat), and how active the translators are, in a facilitating way, in the translation rooms?

This probably does call for a laptop or desktop though? Things may be a whole lot more difficult on a phone (and behaviour of BBB on mobile operating systems/browsers may not be so predictable? I’m not sure.)

echo test and asking to join headphone/microphone 4 times

This tedious business only has to be suffered once, to set up a participant’s presence in a room (i.e. a connection in a browser tab). Then, as long as that tab stays open, no repeat logging in is required. It’s just the audio in the browser that is switched, not the audio/video connection with the BBB server.

[Edit] All credit for this smart routine lies with Monica Garriga and her collaborators, who designed the meeting of FSMET COMMUNeS in the WSF gathering.


Thanks for your comments. I have just done a tryout of the breakout rooms to see how it works, and it didn’t seem to be so easy. Below are some issues I ran into.

First of all, I tried to open several tabs but I didn’t manage, as the system kept disconnecting me in the other tab. Maybe you have tips for me on that. Only on the computer where I was moderator did I eventually manage to be in two rooms in two tabs.
Then, concerning the audio: to listen to only one room I had to leave the audio, and when I wanted to hear the audio again I needed to reconnect the microphone and do the echo test.
I would love to get feedback from the organizer of the WSF and some advice on setting everything up right, and maybe to do another tryout with interpreters.

Interpreters who need to switch languages quickly will need several clicks: join audio and mute in the Room 1 tab, switch to the Room 2 tab, unmute there, and leave audio in Room 1. They can of course keep both audios on if people don’t speak in the two rooms at the same time. Participants who understand both languages will need to switch the audio on and off for each room, which takes time, so they will miss the beginning of each speaker. Once you decide to join one room, it’s also a pity not to be able to change rooms whenever you want without having two tabs open.

Thanks for your advice!

Puzzling. I guess we should set up a trial meeting? Four participants - non-English speaker, translator from non-English, facilitator main room, participant (one or both of the latter English only). I’m happy to use my meet.coop rooms to set up.

the organizer of the WSF

Monica is not a member here. But @dvdjaco was a designer of the meeting I was in, I think?


Hi,
Should we set up a specific channel on the forum for simultaneous interpretation with BBB, to see if others are interested and to gather everyone’s experience and needs? I also contacted the Guerilla Interpreters to see if they want to join a tryout call. I will let you know, and we can set up a call for next week. Thanks a lot


Sounds good to me. If possible please avoid Tuesday and Thursday 13:30-15:00 BST, which seem to have become times for meet.coop circles. But choose whatever works - and meetings next week might fall differently anyway.

Concerning the simultaneous translation feature, I know there are BigBlueButton providers in Germany that are seeking to fund development of such a feature; I could reach out if there is interest.

From what I know now about the code base of BBB, I think this will be quite a complicated project and probably won’t make it into the main code base fast, but who knows :wink:

I think simultaneous translation is perfectly possible in BBB as is. One room per translated language. Two tabs open per participant. No surcharge for the service. No elaborate code.

Hi,
I would be interested in knowing about these BBB providers, to see what they are thinking of.
Thanks for sharing their contacts.

I got some more information: Apparently a couple of organizations are already funding the development of multiple audio tracks for BigBlueButton, together with the main developers. As I understand it this feature will become a part of BigBlueButton 2.3, although it’s hard to say how long this will take. If I get more information I will let you know.


Hi,
Do you have the names of the organizations participating in this development?
Thanks a lot


Sadly not, but if I get to know more I will let you know!


It looks like some people are working on it. I found this on a forum - maybe good to contact them. I will try: https://github.com/bigbluebutton/bigbluebutton/issues/2642


Hello @camille and Mike, I am here, not very active, though… and I wasn’t aware of this conversation, sorry.
I think Mike has explained very well the setting we did in the WSF Commons Confluence meeting.
" All participants are continuously ‘in’ the main room. And also, if they choose, in a translation room. They can raise their hand in either room at any time. Thus being seen in the main room is just a matter of the facilitator establishing a clear signal - like writing “HAND” or “SPEAK” or “@@@” in the public chat.
Switching between hearing a room is just a matter of clicking the audio symbol in a tab in the browser.
Switching between seeing a room (and its public chat) is just a matter of selecting the tab in the browser. A participant with the camera on can be seen on both rooms, and is also ‘seen’ in the participant list in both rooms and the public chat of both rooms."
I spoke with the professional translators the day before, set up the five rooms, and welcomed each person who arrived in the room before the session started, explaining the setup to each of them. Once the session started we explained it again to everyone. What was important was to follow a structured pad, which a few found a bit tedious but which was helpful for holding a real multilingual, hands-on working session, not a traditional conference with talking heads. Results were collected here: https://www.teixidora.net/wiki/Horitzons_comuns/en#Commons_Confluence.2C_June_2020 Let me know if there is anything else you’d like to know about that experience.
Cheers!


In a meeting with @camille and translator colleagues, they demonstrated to me that the arrangements noted above don’t adequately support the work of simultaneous interpretation, in a conversational setting (as distinct from simultaneous translation in a conference setting). I intended to document this, but I realise that I haven’t fully grasped what the difficulty is, apologies. It hinges on the complexity of the visual, audio, textual and linguistic ‘space’ that the translator must operate in?

I think the key might be, for translators to describe what the normal arrangement of channels and equipment and controls and roles might be, in a physical-space, simultaneous interpretation setup? Then it might be possible to rigorously determine whether BBB enables this to be mimicked? So that it feels familiar, to a professional translator.

There were secondary issues too, regarding

  • what controls were visible in different browser environments (eg audio ON/OFF in the browser tab)
  • confusion for folks, switching between ‘tab-rooms’, and
  • issues of bandwidth, devices and fluency in browsers/video-rooms, for participants in settings with reduced affordances (e.g. participants in Africa using smartphones, on limited or unstable bandwidth).

These are real issues for the FOEI use case, but distinct from the issue discussed in this thread. They are a mixture of tech infrastructure and facilitator protocol?

Hi everyone,

I asked one of our interpreters who tried BBB with @mikemh and me about the exact needs of an interpreter for simultaneous interpretation. He has just sent me a description of a standard booth, which can also be found in some online platforms that allow simultaneous interpretation, for example zipdx: https://www.zipdx.info/.

See his message below explaining the needs of interpreters:
[Image: interpreter console]

This is a picture of one of the most widely used interpreter consoles, which basically sums up what we need:

A: individual volume for each interpreter - as you can see the console also has bass and treble controls which are useful but no way essential in a remote simultaneous interpreting (RSI) environment.

B: A button which enables and disables the interpreter’s mic.

C: Mute or ‘cough’ button - an extremely useful tool for the interpreter.

D: These buttons enable you to listen to the other languages being produced - particularly useful when you need to take what we call relay. That is, if, for example, the speaker is speaking in Chinese and Chinese is not one of your languages, you listen to the booth translating from Chinese into English, and from the English you then translate into Spanish (for example).

E: These buttons allow you to choose the outgoing language, in this case the booth would produce either Russian or Italian.

The crescent of buttons on the right are to message the speaker or chair to ask them to: slow down, please repeat, help, etc. (useful also, but not essential, in an RSI environment).

It is vital also for us to be able to hear (at the same time) the original + the production of our boothmate. This is for two things: 1. An important role of the “resting” interpreter is to help the “active” interpreter, writing down figures, helping with acronyms, looking up terminology, references, etc., and particularly to know that you need to take over swiftly if your boothmate gets stuck. 2. To facilitate a smooth handover from one interpreter to the other. To such ends, to be able to see one’s boothmate is extremely useful too.

The other essential thing for us, is to be able to see both the speaker and any presentations that he/she may be using.

Tony

I hope this helps to explain the needs of an interpreter, who doesn’t interpret in only one language but needs to do, for example, SP>EN and then EN>SP.
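Tony's point D (relay) can be made concrete with a small sketch. This is a hypothetical Python illustration, not part of any booth console or of BBB - the function name and data shapes are mine - and it only encodes the rule he describes: listen to the floor directly if you know its language, otherwise take relay from a booth producing a language you do know.

```python
# Hypothetical sketch of relay selection as described above: an interpreter
# listens to the floor language directly when they know it, and otherwise
# takes "relay" from another booth producing a language they do know.

def pick_feed(floor_language, booth_outputs, known_languages):
    """Choose which audio feed an interpreter should listen to.

    floor_language:  language currently spoken on the floor, e.g. "zh"
    booth_outputs:   languages the other booths are producing, e.g. ["en", "fr"]
    known_languages: the interpreter's working languages, e.g. {"en", "es"}
    """
    if floor_language in known_languages:
        return "floor"
    for language in booth_outputs:
        if language in known_languages:
            return f"relay:{language}"  # e.g. Chinese floor -> English booth
    return None  # no usable feed: the interpreter cannot work this turn

# Tony's example: Chinese floor, an en<->es interpreter, English booth available.
print(pick_feed("zh", ["en", "fr"], {"en", "es"}))  # relay:en
```

In the two-tab BBB arrangement, "taking relay" would mean unmuting another translation room's tab instead of the main room's - which is exactly where the console's D buttons have no browser equivalent today.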


Thanks @camille and thanks to Tony too. This array of controls does look fairly specialised. Is there anything else specialised in the setup? I’m thinking about audio channels - does an interpreter have a special audio mix - maybe, one audio feed in one ear and another in the other?
In BBB, with two rooms in separate browser tabs, it certainly is possible to hear audio from both rooms simultaneously. But because the control of the audio feed in the browser interface is simply ON/OFF for each tab, it’s not possible to separate the two audio feeds into left/right headphone channels.
That would require a custom browser plugin, I guess.
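To illustrate what "one room per ear" would mean at the audio level, here is a minimal sketch in Python. It is purely illustrative - BBB and the browser tab controls expose nothing like this today - and simply combines two mono sample streams into stereo frames, with Room A panned hard left and Room B hard right.

```python
# Illustrative only: "one room per ear" at the sample level. Two mono feeds
# become stereo frames, Room A hard left, Room B hard right. Neither BBB nor
# the browser's per-tab mute control exposes this kind of mix today.

def mix_hard_panned(room_a_samples, room_b_samples):
    """Interleave two mono streams into (left, right) stereo frames."""
    frames = []
    for a, b in zip(room_a_samples, room_b_samples):
        frames.append((a, b))  # left ear = Room A, right ear = Room B
    return frames

print(mix_hard_panned([0.1, 0.2], [0.5, 0.6]))  # [(0.1, 0.5), (0.2, 0.6)]
```

In a browser this mix would presumably be built with the Web Audio API (routing each tab's media element through a stereo panner), but that would indeed need a custom extension rather than anything BBB provides.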