From 2021 to 2023, I created a new, robust suite of fraud and abuse moderation tools for content on Amazon's platform, using machine learning and AI to automate content moderation and compliance for all user-generated content (UGC) across Amazon's businesses and platforms.
Working with Software Engineering, User Research, Amazon Worldwide Operations, and Product Management to design a suite of Content Moderation tools.
Create a successful suite of tools for the Developer Experience, unifying them with a fresh look, feel, and user experience. The design must explore new concepts that expand and unite tools across departments.
Design a successful product, and design high-fidelity prototypes for its integration.
In the August 2021 survey, 100% of the users surveyed had experience with the legacy platform and its specific issues. The most common issues were tooling (100%), visual differences with no image fingerprinting (100%), and dependency on antiquated tools from many different platforms that require manual input with no tracking or cohesion (100%).
Operations staff and content moderation reviewers struggle with the UI layouts of more than 10 different legacy systems and need a way to visualize and resolve tasks quickly, with factual data and content comparison, to determine whether items are compliant.
Research helped me understand the major friction points and how to design for them: analyzing data, reviewing the top issues on popular platforms where users seek help or file issues, and asking why each issue is common and what the root cause is in the relationship between code, framework paradigms, and UI. Usability testing, interviewing, observing, and engaging in social outreach honed the solution direction.
Within six weeks, launch an MVP of a text-review tool and test the ML automation's accuracy.
Within twelve weeks, launch the new design system for a Content Moderation UX/UI and a Decision Support Engine for any content type and compliance need, as a modular, component-based platform.
The first Content Moderation UX/UI prototype was revealed within a three-week timetable.
The typical Content Moderation Operator experience makes it frustrating to resolve tasks efficiently. The typical resolution workflow offers many opportunities to help the user understand the paradigms creating this frustration.
My role was to identify the blockers and needs of the Developer/User within the workflow and within the paradigms of the new Content Moderation Decision Support Engine. I designed the Operator's portal to visualize and contextualize content so Operators can quickly make decisions on content, users, and account holders.
As illustrated, the high-level workflow has no easy path to discovering the root cause of an issue and a solution to resolve it.
Designing the Decision Support Engine for visualizing how rules can be combined to create fact-based decisions.
Resolving technical issues and generating fact-based outcomes that rely on a complex matrix of regulations.
Creating the Decision Support Engine UI for visualizing what it takes to generate a decision based on policy and legal regulations.
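To illustrate the idea of combining policy rules into a fact-based decision, here is a minimal sketch. It is a hypothetical example, not the production DSE: the `Rule` structure, the policy IDs, and the `decide` function are all assumptions made for illustration.

```python
# Hypothetical sketch (not the actual Amazon implementation): combining
# simple fact-based rules into a single moderation decision with evidence.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    policy_ref: str                   # illustrative policy/regulation ID
    check: Callable[[dict], bool]     # returns True when the rule is violated

def decide(content: dict, rules: list[Rule]) -> dict:
    """Evaluate every rule against the content and return a decision
    plus the facts (rule hits) that justify it."""
    violations = [r for r in rules if r.check(content)]
    return {
        "decision": "reject" if violations else "approve",
        "evidence": [(r.name, r.policy_ref) for r in violations],
    }

# Example rules (invented for this sketch)
rules = [
    Rule("banned-term", "POLICY-101",
         lambda c: "forbidden" in c.get("text", "")),
    Rule("missing-image-fingerprint", "POLICY-202",
         lambda c: c.get("image") is not None and "fingerprint" not in c),
]

result = decide({"text": "a forbidden phrase"}, rules)
print(result["decision"])  # prints "reject"
```

Because every outcome carries the list of rules that fired, the Operator sees the fact-based evidence behind the decision rather than a bare verdict.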
One of the key issues Operators experienced with the legacy system was that content lacked the useful context needed to understand and resolve tasks.
I explored the variations of content and top user needs, and how they could be visualized for the user to reduce the time it takes to reach a decision and to inform ML and AI decision-making.
Initial ideas, and ways to reproduce the problem, were explored and studied. The application was used in User Research studies and testing iterations as the design was developed.
Launching Decision Support Engine Fact-Based Rules.
Decision Support Engine Fact-Based Rules can be enabled with the new setting.
The Decision Support Engine is a powerful tool for visualizing and exploring the rules-based decision matrix. More to come.
Added an indication of the Rules workflow, plus basic rule and debugging support, allowing the user to step through how the DSE generates code.
The Decision Support Engine is a powerful tool for visualizing and exploring decision trees. Its framework uses components as the core building block for generating rules-based code to inform the platform, ML, AI, and manual reviews. It helps users visualize and explore the code trees, and can be used for understanding existing rules and diagnosing rule issues.
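A decision tree of this kind can be sketched in a few lines. This is an illustrative model only, with assumed names and structure, to show how a rule tree might be flattened into an indented view an Operator can trace when diagnosing a rule:

```python
# Illustrative sketch (names and structure are assumptions, not the
# production DSE): a rules-based decision tree rendered as indented lines
# so a reviewer can step through how an outcome is reached.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                          # rule or outcome description
    children: list["Node"] = field(default_factory=list)

def render(node: Node, depth: int = 0) -> list[str]:
    """Flatten the tree into indented lines, one per rule or outcome."""
    lines = ["  " * depth + node.label]
    for child in node.children:
        lines.extend(render(child, depth + 1))
    return lines

tree = Node("content submitted", [
    Node("text violates policy?", [
        Node("yes -> reject (cite policy)"),
        Node("no -> image fingerprint match?", [
            Node("yes -> escalate to manual review"),
            Node("no -> approve"),
        ]),
    ]),
])

print("\n".join(render(tree)))
```

Indentation mirrors the depth of each rule in the tree, so the path from submitted content to a final outcome reads top to bottom.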
Early discussions and design concepts were shared via Slack, Chime, Quip, Figma and other internal Amazon products for rapid communication and development.
Objective: An easy-to-understand workflow representing the layout alignment expectations and a linear approach to generating a rule.
Objective: Design an intuitive UI to show the Rule's content and how it will be utilized to generate an outcome.
Objective: Design an intuitive UI and UX to work with the Rule policy constraints.
Of note: Iterating on visualizing the Decision Support Engine parameters and application.
Objective: Design an intuitive UI to show that the application's rule content continues to inform the ML, AI, and backend.
Research shows the most common issue for Operators and Account Managers is understanding the requirements and policies for decision making and quickly finding fact-based evidence to render the decisions.
The Decision Support Engine was demoed in 2022, and the alpha version was released in 2023.
The Decision Support Engine is a powerful tool for visualizing rule trees. The framework uses rules as the core building block and helps users visualize how rules create outcomes on policy violations.