Safety and Moderation

Heya folks! Jackie here to talk about something very near and dear to me: the safety of you and your communities.

Moderators take on a range of roles to help run and protect the spaces we occupy online. Sometimes you are the tank in chat, shielding users from inappropriate messages, spoilers, and spam. Other times you are the healer, taking note of when community members may need some encouraging words or resources to help them. You could also be a chaotic DPS, reminding chat of announcements, managing queues, and collecting information for giveaways or votes. Whatever role you play when you help moderate a community, or multiple communities, we want to provide you with the tools to make your life easier.

We’re all in this together.

In the vein of making your life easier, we aim to empower you to protect your community as a team. Moderation teams are rarely a team of one, and when a stream goes live there is no guarantee that every mod will be able to make it to the show or stay the whole time. That is why we are creating a space where mods can view a log of actions, comments, and requests, whether or not they were tuned in at the time. By reviewing what happened while they were away, they can better protect the space while they are there, and do the same for the other communities they are part of.
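To make that idea a little more concrete, here is a rough sketch of what a shared action log could look like. It is written in TypeScript purely as an illustration; the names (ModAction, actionsSince, and so on) are made up for this example and are not Altair's actual code or API.

```typescript
// Hypothetical shapes for a shared moderation log. Everything here is
// illustrative, not Altair's real data model.

type ModActionType = "ban" | "timeout" | "message_deleted" | "comment" | "request";

interface ModAction {
  id: string;
  channelId: string;
  moderatorId: string;
  targetUserId?: string;   // absent for comments or requests not aimed at a user
  type: ModActionType;
  note?: string;           // e.g. "spoilers in chat", "please keep an eye on this user"
  createdAt: Date;
}

// Return everything that happened in a channel since a mod was last online,
// oldest first, so they can catch up before jumping back into the chat.
function actionsSince(log: ModAction[], channelId: string, lastSeen: Date): ModAction[] {
  return log
    .filter((a) => a.channelId === channelId && a.createdAt.getTime() > lastSeen.getTime())
    .sort((a, b) => a.createdAt.getTime() - b.createdAt.getTime());
}
```

The point is simply that every ban, timeout, deleted message, and mod note lives in one place, so a mod who just logged on can filter for everything since their last session.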

We would be remiss if we treated communities as completely separate entities. We know there is overlap, and that sometimes you may need assistance from another alliance to make it to the end. That is why, when moderators ban someone, they can be prompted to ban that user in the other communities they mod for as well. In the instance where a large number of users are banned, moderators will have the ability to export a list of banned users to determine whether action needs to be taken in another channel. And when these bad actors are banned, moderators can put their community at ease knowing they can no longer view the stream or chat.
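To sketch how that might look in practice: when a mod issues a ban, the other channels they moderate can be offered as one-click targets, and a channel's ban list can be exported for review elsewhere. Again, this is an illustrative TypeScript sketch with invented names (Ban, banPromptTargets, exportBansAsCsv), not a description of Altair's real implementation.

```typescript
// Illustrative only: the Ban shape and these helpers are assumptions
// for the sake of example.

interface Ban {
  channelId: string;
  userId: string;
  reason: string;
  bannedAt: Date;
}

// When a mod bans someone, surface the other channels they moderate so they
// can choose to apply the same ban there in one step.
function banPromptTargets(modChannels: string[], banningChannel: string): string[] {
  return modChannels.filter((id) => id !== banningChannel);
}

// Export a channel's ban list as CSV so it can be reviewed against another channel.
function exportBansAsCsv(bans: Ban[], channelId: string): string {
  const rows = bans
    .filter((b) => b.channelId === channelId)
    .map((b) => [b.userId, b.reason.replace(/,/g, ";"), b.bannedAt.toISOString()].join(","));
  return ["userId,reason,bannedAt", ...rows].join("\n");
}
```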

In addition to this, our reporting system will allow you to easily upload and link evidence documenting the actions of these users. We will take these reports very seriously and update you on the resolution of your report. These things are so important to us because we want content creators and community members to feel as physically, emotionally, and mentally safe as they can be while on Altair, which is why we are requiring users to sign up with 2FA. This requirement protects your account and personal information, and it slows down the process of creating new accounts. Slowing down the process may sound counter-intuitive, but we do have our reasons: preventing bad actors from creating an unlimited number of accounts very quickly is one step toward stopping them from harassing and attacking communities.
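For the curious, the 2FA requirement can be pictured as a gate on account activation rather than an optional setting: an account is not usable until a second factor has been verified. The sketch below is hypothetical (PendingAccount, startSignup, and completeSignup are invented for illustration), but it shows the shape of the idea.

```typescript
// A minimal sketch of "no account without 2FA": signup creates a pending
// account that only activates after the second factor is verified.
// All names here are assumptions made for this example.

interface PendingAccount {
  email: string;
  totpSecret: string;   // shared secret shown to the user, e.g. as a QR code
  createdAt: Date;
  activated: boolean;
}

const pending = new Map<string, PendingAccount>();

function startSignup(email: string, totpSecret: string): void {
  pending.set(email, { email, totpSecret, createdAt: new Date(), activated: false });
}

// The account only becomes usable once the user proves they control the
// second factor; verifyCode would wrap a real TOTP check in practice.
function completeSignup(
  email: string,
  code: string,
  verifyCode: (secret: string, code: string) => boolean
): boolean {
  const account = pending.get(email);
  if (!account || !verifyCode(account.totpSecret, code)) return false;
  account.activated = true;
  return true;
}
```

Because every new account has to walk through this extra step, churning out throwaway accounts stops being instant, which is exactly the slowdown described above.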

While we are designing and implementing the experiences and tools for Altair, we realize that there is ultimately no perfect solution. Since there is no perfect defense, we seek to arm you with the knowledge and support to protect yourself. All of our tools will be documented with not only how they work but also examples of when and where to use them. In addition, our resources for content creators’ mental health will include how to cultivate a healthy and protected environment for all. Too often, when creators first start out, they do not take many safety precautions because they believe they are “too small” a creator for anyone to invade their privacy or spaces. Unfortunately, that is not the truth, and they end up overwhelmed in these situations because they did not prepare their spaces preemptively.

It is for this reason we are starting the conversation now and have been talking through this internally since day 1. Every time we encounter new mechanics that these bad actors will use to harm our safe spaces we want to update our guides and tools to best help you stay safe. Moderation is a constantly evolving process with so many variables.

How do we design moderation features that are easy to learn and navigate?

How do we create moderation tools that can scale as Altair grows?

How do we moderate content in languages we do not understand?

How do we make sure we are solving the right problem?

Our brilliant CTO, Cody, shared a really great article with the team when I was getting lost in the sauce trying to figure out moderation features. I had suggested some unsustainable checks that would have killed the platform’s performance and would not have scaled at all. We are lucky, though, because as we explore how we will handle moderation on Altair, we have the experience of being on other moderated platforms. Those experiences let us learn from the mistakes they made, see how they scaled their solutions, and build a strong foundation for Altair from the beginning.

Quick intermission - can we get a round of applause for Cody? He is constantly reminding me of reality while I dream of going to the moon with some of these ideas.

We hope that sharing these internal talks with you makes you excited for the future of Altair. Our team has been working hard to educate ourselves and each other so that we can better design and build your future home. We believe creators and community members should have a platform where their mental, physical, and emotional health is the primary concern.

As we approach a year into this journey together and get closer to debuting Altair, I ultimately hope that you feel seen and heard by us. Thank you for making it to the end of my ramblings, and thank you for having faith in us.