• Roblox has always been designed to protect its youngest users and is now adapting to a growing audience of older users.
  • Roblox employs multimodal AI solutions spanning text, voice, visuals, 3D models, and code.
  • To improve safety across the whole industry, Roblox contributes through open source, collaboration with partners, and support for legislation.
  • Safety and civility have been priorities since Roblox's founding, with work to protect the community from potential harms beginning before any new feature launches.

Scaling Safety and Civility on Roblox


April 4, 2024, by Matt Kaufman, Chief Safety Officer, Product & Tech

Roblox has always been designed to protect our youngest users; we are now adapting to a growing audience of older users. With text, voice, visuals, 3D models, and code, Roblox is in a unique position to succeed with multimodal AI solutions. We improve safety across the industry wherever we can, via open source, collaboration with partners, or support for legislation.

Safety and civility have been foundational to Roblox since its inception nearly two decades ago. On day one, we committed to building safety features, tools, and moderation capabilities into the design of our products. Before we launch any new feature, we’ve already begun thinking about how to keep the community safe from potential harms. This process of designing features for safety and civility from the outset, including early testing to see how a new feature might be misused, helps us innovate. We continually evaluate the latest research and technology available to keep our policies, tools, and systems as accurate and efficient as possible.

When it comes to safety, Roblox is uniquely positioned. Most platforms began as a place for adults and are now retroactively working to build in protections for teens and children. But our platform was developed from the beginning as a safe, protective space for children to create and learn, and we are now adapting to a rapidly growing audience that’s aging up. In addition, the volume of content we moderate has grown exponentially, thanks to exciting new generative AI features and tools that empower even more people to easily create and communicate on Roblox. These are not unexpected challenges: our mission is to connect a billion people with optimism and civility. We are always looking to the future to understand what new safety policies and tools we’ll need as we grow and adapt.
Many of our safety features and tools are based on innovative AI solutions that run alongside an expert team of thousands who are dedicated to safety. This strategic blend of experienced humans and intelligent automation is imperative as we work to scale the volume of content we moderate 24/7. We also believe in nurturing partnerships with organizations focused on online safety, and, when relevant, we support legislation that we strongly believe will improve the industry as a whole.

Leading with AI to Safely Scale

The sheer scale of our platform demands AI systems that meet or top industry-leading benchmarks for accuracy and efficiency, allowing us to respond quickly as the community grows, policies and requirements evolve, and new challenges arise. Today, more than 71 million daily active users in 190 countries communicate and share content on Roblox. Every day, people send billions of chat messages to their friends on Roblox. Our Creator Store has millions of items for sale, and creators add new avatars and items to Marketplace every day. This will only get larger as we continue to grow and enable new ways for people to create and communicate on Roblox.

As the broader industry makes great leaps in machine learning (ML), large language models (LLMs), and multimodal AI, we invest heavily in leveraging these new solutions to make Roblox even safer. AI solutions already help us moderate text chat, immersive voice communication, images, and 3D models and meshes. We are now using many of these same technologies to make creation on Roblox faster and easier for our community.

Innovating with Multimodal AI Systems

By its very nature, our platform combines text, voice, images, 3D models, and code. Multimodal AI, in which systems are trained on multiple types of data together to produce more accurate, sophisticated results than a unimodal system, presents a unique opportunity for Roblox.
Multimodal systems are capable of detecting combinations of content types (such as images and text) that may be problematic in ways that the individual elements aren’t. To imagine how this might work, let’s say a kid is using an avatar that looks like a pig; totally fine, right? Now imagine someone else sends a chat message that says “This looks just like you!” That message might violate our policies around bullying. A model trained only on 3D models would approve the avatar. A model trained only on text would approve the text and ignore the context of the avatar. Only something trained across text and 3D models would be able to quickly detect and flag the issue in this example. We are in the early days for these multimodal models, but we see a world, in the not-too-distant future, where our system responds to an abuse report by reviewing an entire experience. It could process the code, the visuals, the avatars, and the communications within it as input and determine whether further investigation or consequence is warranted.

We’ve already made significant advances using multimodal techniques, such as our model that detects policy violations in voice communications in near real time. We intend to share advances like these when we see the opportunity to increase safety and civility not just on Roblox but across the industry. In fact, we are sharing our first open source model, a voice safety classifier, with the industry.

Moderating Content at Scale

At Roblox, we review most content types to catch critical policy violations before they appear on the platform. Doing this without causing noticeable delays for the people publishing their content requires speed as well as accuracy. Groundbreaking AI solutions help us make better decisions in real time to keep problematic content off of Roblox, and if anything does make it through to the platform, we have systems in place to identify and remove that content, including our robust user reporting systems.
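The avatar-plus-chat example above hints at a simple late-fusion pattern: neither unimodal score alone crosses a moderation threshold, but a fused score that accounts for their interaction does. Here is a minimal sketch of that idea; all function names, scores, weights, and thresholds are invented for illustration and are not Roblox's actual models or numbers.

```python
# Hedged sketch of late-fusion multimodal moderation. Everything here is
# illustrative; real systems learn fusion from jointly trained models.

def unimodal_scores(text_score: float, avatar_score: float) -> dict:
    """Violation scores in [0, 1] from separate text-only and
    3D-model-only classifiers (stubbed out as plain inputs)."""
    return {"text": text_score, "avatar": avatar_score}

def fused_score(scores: dict, interaction_weight: float = 0.6) -> float:
    """Late fusion: average the unimodal scores, plus an interaction
    term that grows only when both modalities carry signal at once."""
    base = (scores["text"] + scores["avatar"]) / 2
    interaction = interaction_weight * scores["text"] * scores["avatar"]
    return min(1.0, base + interaction)

THRESHOLD = 0.5

# The pig avatar and the message "This looks just like you!" are each
# only mildly suspicious alone, but a clear bullying pattern together.
scores = unimodal_scores(text_score=0.45, avatar_score=0.45)
assert max(scores.values()) < THRESHOLD   # each modality alone passes
print(fused_score(scores) > THRESHOLD)    # prints True: fusion flags it
```

The design choice worth noting is the multiplicative interaction term: it stays near zero when only one modality is suspicious, so benign text next to a benign avatar is not penalized twice.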
We’ve seen the accuracy of our automated moderation tools surpass that of human moderators when it comes to repeatable, simple tasks. By automating these simpler cases, we free up our human moderators to spend the bulk of their time on what they do best: the more complex tasks that require critical thinking and deeper investigation. When it comes to safety, however, we know that automation cannot completely replace human review. Our human moderators are invaluable for helping us continually oversee and test our ML models for quality and consistency, and for creating high-quality labeled data sets to keep our systems current. They help identify new slang and abbreviations in all 16 languages we support and flag cases that come up frequently so that the system can be trained to recognize them.

We know that even high-quality ML systems can make mistakes, so we have human moderators in our appeals process. Our moderators help us get it right for the individual who filed the appeal, and they can flag the need for further training on the types of cases where mistakes were made. With this, our system grows increasingly accurate over time, essentially learning from its mistakes. Most important, humans are always involved in any critical investigations involving high-risk cases, such as extremism or child endangerment. For these cases, we have a dedicated internal team working to proactively identify and remove malicious actors and to investigate difficult cases in our most critical areas. This team also partners with our product team, sharing insights from their work to continually improve the safety of our platform and products.

Moderating Communication

Our text filter has been trained on Roblox-specific language, including slang and abbreviations. The 2.5 billion chat messages sent every day on Roblox go through this filter, which is adept at detecting policy-violating language.
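A toy illustration of that normalize-then-match style of filtering is below. The blocklist, leetspeak map, and policy labels are invented for the example; a production filter would rely on trained models rather than a hand-written list.

```python
import re

# Hypothetical, minimal chat-filter sketch. The blocklist, the leetspeak
# normalization map, and the policy labels are illustrative only.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a"})
BLOCKED = {"noob trash": "bullying", "freerobuxscam": "scam"}  # invented slang

def normalize(message: str) -> str:
    """Lowercase, undo simple leetspeak, collapse character repeats
    and whitespace tricks used to evade matching."""
    text = message.lower().translate(LEET_MAP)
    text = re.sub(r"(.)\1{2,}", r"\1", text)      # "traaash" -> "trash"
    return re.sub(r"\s+", " ", text).strip()

def check_message(message: str):
    """Return the violated policy label, or None if the message passes."""
    text = normalize(message)
    squashed = text.replace(" ", "")               # catch spaced-out evasion
    for phrase, policy in BLOCKED.items():
        if phrase in text or phrase in squashed:
            return policy
    return None

print(check_message("N00B    TRAAASH"))   # detected despite obfuscation
print(check_message("want to trade?"))    # passes
```

The normalization step is doing the real work here: without it, trivial character substitutions would slip past any exact-match list.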
This filter detects violations in all the languages we support, which is especially important now that we’ve released real-time AI chat translations.

We’ve previously shared how we moderate voice communication in real time via an in-house custom voice detection system. The innovation here is the ability to go directly from live audio to having the AI system label the audio as policy violating or not, in a matter of seconds. As we began testing our voice moderation system, we found that, in many cases, people were unintentionally violating our policies because they weren’t familiar with our rules. We developed a real-time safety system to help notify people when their speech violates one of our policies. These notifications are an early, mild warning, akin to being politely asked to watch your language in a public park with young children around. In testing, these interventions have proved successful in reminding people to be respectful and directing them to our policies to learn more. When compared against engagement data, the results of our testing are encouraging and indicate that these tools may effectively keep bad actors off the platform while encouraging truly engaged users to improve their behavior on Roblox. Since rolling out real-time safety to all English-speaking users in January, we have seen a 53 percent reduction in abuse reports per daily active user related to voice communication.

Moderating Creation

For visual assets, including avatars and avatar accessories, we use computer vision (CV). One technique involves taking photographs of the item from multiple angles. The system then reviews those photographs to determine what the next step should be. If nothing seems amiss, the item is approved. If something is clearly violating a policy, the item is blocked and we tell the creator what we think is wrong. If the system is not sure, the item is sent to a human moderator to take a closer look and make the final decision.
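That three-way outcome (approve, block, or escalate to a human) can be sketched as a pair of confidence thresholds over per-angle scores. The thresholds, the max-aggregation choice, and the scores themselves are assumptions for illustration, not Roblox's actual pipeline.

```python
# Hedged sketch of the approve / block / escalate flow for a 3D item
# reviewed as renders from several angles. In a real system, each score
# would come from a vision model; here they are plain inputs.

APPROVE_BELOW = 0.2   # clearly fine
BLOCK_ABOVE = 0.8     # clearly violating

def review_item(render_scores: list) -> str:
    """Decide on an item from per-angle violation scores in [0, 1].

    An item is only as safe as its worst angle, so aggregate with max():
    a single revealing viewpoint should dominate the decision."""
    worst = max(render_scores)
    if worst < APPROVE_BELOW:
        return "approve"
    if worst > BLOCK_ABOVE:
        return "block"          # and tell the creator what seems wrong
    return "human_review"       # uncertain: escalate to a moderator

print(review_item([0.05, 0.10, 0.08]))  # approve
print(review_item([0.10, 0.95, 0.12]))  # block: one angle reveals a problem
print(review_item([0.40, 0.55, 0.30]))  # human_review
```

Keeping a wide band between the two thresholds is the lever that trades automation volume against the human-review workload described earlier.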
We do a version of this same process for avatars, accessories, code, and full 3D models. For full models, we go a step further and assess all the code and other elements that make up the model. If we are assessing a car, we break it down into its components (the steering wheel, seats, tires, and the code underneath it all) to determine whether any might be problematic. If there’s an avatar that looks like a puppy, we need to assess whether the ears, the nose, and the tongue are problematic.

We need to be able to assess in the other direction as well. What if the individual components are all perfectly fine but their overall effect violates our policies? A mustache, a khaki jacket, and a red armband, for example, are not problematic on their own. But imagine these assembled together on someone’s avatar, with a cross-like symbol on the armband and one arm raised in a Nazi salute, and a problem becomes clear.

This is where our in-house models differ from the available off-the-shelf CV models. Those are generally trained on real-world items; they can recognize a car or a dog but not the component parts of those things. Our models have been trained and optimized to assess items down to their smallest component parts.

Collaborating with Partners

We use all the tools available to us to keep everyone on Roblox safe, but we feel equally strongly about sharing what we learn beyond Roblox. In fact, we are sharing our first open source model, a voice safety classifier, to help others improve their own voice safety systems. We also partner with third-party groups to share knowledge and best practices as the industry evolves. We build and maintain close relationships with a wide range of organizations, including parental advocacy groups, mental health organizations, government agencies, and law enforcement agencies. They give us valuable insights into the concerns that parents, policymakers, and other groups have about online safety.
In return, we are able to share our learnings and the technology we use to keep the platform safe and civil. We have a track record of putting the safety of the youngest and most vulnerable people on our platform first. We have established programs, such as our Trusted Flagger Program, to help us scale our reach as we work to protect the people on our platform. We collaborate with policymakers on key child safety initiatives, legislation, and other efforts. For example, we were the first and one of the only companies to support the California Age-Appropriate Design Code Act, because we believe it’s in the best interest of young people. When we believe something will help young people, we want to propagate it to everyone. More recently, we signed a letter of support for California Bill SB 933, which updates state laws to expressly prohibit AI-generated child sexual abuse material.

Working Toward a Safer Future

This work is never finished. We are already working on the next generation of safety tools and features, even as we make it easier for anyone to create on Roblox. As we grow and provide new ways to create and share, we will continue to develop new, groundbreaking solutions to keep everyone safe and civil on Roblox, and beyond.


Source: https://blog.roblox.com/2024/04/scaling-safety-civility-roblox/


