Milestone and scope discussions

Discussion around our priorities, with the goal of establishing aligned milestones that we can capture on our Roadmap page and communicate publicly through our website.

My thoughts on immediate focus and proposals for our 2020 focus (sent earlier by email, with slight edits):

My concern with fragmenting meet.coop into multiple instances too early on is that it will make for a bad user experience and complicate marketing efforts. If meet.coop is a single well-monitored instance running on a well-resourced machine, the UX is simple, it will always work, and it is easy to tell people about it. If meet.coop becomes a discovery site for BBB instances, all with different features (recording vs. not), running on different hardware, maintained by groups using different practices, it is harder to navigate and harder to make the service reliable and predictable. It is like wanting to download a file and being presented with 6 mirrors, except I cannot click each one to find out whether it works; I have to book my client meeting for tomorrow and hope my mirror will still be working then. Any time there is an outage or network issue on any instance, it becomes “meet.coop doesn’t work”.

This is why, at our meeting, the Jan 2021 milestone on meet.coop for eu., uk., and ca. subdomains made sense to me. I think that can only work if the instances share configuration management with centralized monitoring and run on an inventory of similar machines that we know work well for BBB. If there are feature differences, those should be fetched from config management and clearly presented on the website, and if any instance breaks, someone is on it.
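To make that last point concrete, here is a minimal sketch of how the website could aggregate per-instance feature differences published from shared config management. The subdomains, manifest path, and field names are assumptions for illustration only, not anything we have agreed on:

```python
# Sketch (assumed layout): each regional instance publishes a small JSON
# feature manifest generated from shared config management, and the website
# aggregates them so feature differences are always shown accurately.
import json
import urllib.request

# Hypothetical endpoints; the real subdomains and paths would come from our inventory.
INSTANCES = {
    "eu": "https://eu.meet.coop/features.json",
    "uk": "https://uk.meet.coop/features.json",
    "ca": "https://ca.meet.coop/features.json",
}

def fetch_features(url: str) -> dict:
    """Fetch one instance's manifest, e.g. {"recording": true, "max_users": 100}."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        return json.load(resp)

def feature_matrix() -> dict:
    """Collect manifests from all instances; mark unreachable ones so monitoring can flag them."""
    matrix = {}
    for region, url in INSTANCES.items():
        try:
            matrix[region] = fetch_features(url)
        except OSError:
            matrix[region] = {"status": "unreachable"}
    return matrix

if __name__ == "__main__":
    print(json.dumps(feature_matrix(), indent=2))
```

The point is just that the same source of truth feeds both monitoring and the public feature table, so the website never drifts from what an instance actually offers.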

I also think that before we dive into geo-specific instances, we should explore load balancing with Scalelite (GitHub: blindsidenetworks/scalelite), per the Sept-Dec item in the Roadmap. If we want to target 50+ person multi-to-multi meetings, this is important, and it will likely affect how we think about geo-specific instances.

I think the “BBB installs as a service for organisations such as schools and colleges” offering (for orgs with on-prem needs due to data privacy) and automated spin-up of instances with guaranteed resources (for events that can’t risk shared resources) will be the path to sustainability. So I feel it’s best we stay focused on the single instance accessible on the meet.coop domain until Sept, treat it almost as a dedicated instance during events like Open2020 (since we know usage is otherwise low), and write case studies on it to establish the reliability of both BBB and meet.coop’s particular instance.

We can set Sept and Jan milestones as times to re-evaluate the path forward. We could, for example, announce to all users of Hypha’s instance that we are consolidating into meet.coop and they need to create new accounts. If the Open Collective (OC) also turns out to generate meaningful recurring income, meet.coop can set up its own OC, and it’s easy for Hypha to flow contributions to meet.coop’s OC.

We can discuss more at our next meeting, but I’d caution against moving the Jan 2021 “instance fragmentation” milestone earlier before we have more clarity on audience, sustainability model, and shared tools and practices.


Based on my reading of the Scalelite documentation, it is designed to be a front-facing service for several back-end BBB servers, all in the same data centre, since the back-end servers share the same network mount for saved videos (for example using NFS). It might be possible to configure it to use something like Syncthing, or perhaps Hadoop, to replicate files between servers; however, we might want to give people a choice of which physical location hosts their meeting, and I don’t believe Scalelite is designed for this. It is more about automatically selecting servers by load / usage.
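To illustrate just the “select servers by load” part (this is not Scalelite’s actual code or API), here is a toy sketch of how a front end might route a BBB `create` call to the least-loaded back end. The server list, load numbers, and secrets are made up; only the BBB checksum scheme (SHA-1 over call name + query string + shared secret) is real:

```python
# Toy sketch of load-based routing in front of multiple BBB back ends.
# Backends, loads, and secrets below are invented for illustration.
import hashlib
from urllib.parse import urlencode

BACKENDS = [
    {"url": "https://bbb1.example.coop/bigbluebutton/api", "secret": "s1", "load": 0.42},
    {"url": "https://bbb2.example.coop/bigbluebutton/api", "secret": "s2", "load": 0.17},
]

def bbb_checksum(call: str, query: str, secret: str) -> str:
    """BBB API checksum: SHA-1 of callName + queryString + sharedSecret."""
    return hashlib.sha1((call + query + secret).encode()).hexdigest()

def create_meeting_url(meeting_id: str, name: str) -> str:
    """Route a new meeting to the least-loaded back end and sign the call with
    that back end's secret. Recordings would still need shared storage, e.g.
    the NFS mount mentioned above."""
    backend = min(BACKENDS, key=lambda b: b["load"])
    query = urlencode({"meetingID": meeting_id, "name": name})
    checksum = bbb_checksum("create", query, backend["secret"])
    return f"{backend['url']}/create?{query}&checksum={checksum}"

print(create_meeting_url("weekly-ops", "Weekly Ops Call"))
```

Routing by load within one data centre is straightforward; it is routing by user-chosen physical location, plus keeping recordings available across sites, that Scalelite doesn’t appear to solve out of the box.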

It is possible that a single physical location will have to run multiple BBB back ends, fronted by Scalelite, if we want to support requests to run 100+ person events. This will of course make maintaining the infrastructure, even at a single physical location, quite expensive. I think we’ll have to wait and see how the current single-server instance performs and what requests come in.
