URL TO REGISTER: CLICK HERE
When: Feb 25 2019, 11 am PST
What You Gain:
- How AI systems can suffer from the same biases as human experts, and how that can lead to biased results
- How data influences the way machine learning systems make decisions
- How selecting the wrong data, or ambiguous data, can bias machine learning results
- Why we often lack insight into how machine learning systems reach their decisions
- How testers, data scientists, and other stakeholders can develop test cases to recognize bias, both in the data and in the resulting system
- Ways to identify and correct bias in machine learning systems
Who Should Attend?
- Test Engineers: Yes
- Test Architects: Yes
- Mobile/Automation Engineers: Yes
- QA Managers: Yes
- QA Directors: Yes
- VP QA: Yes
- CTO: Yes
- Anyone curious about SQA and test automation
Why Machine Learning Applications Are Like People
When you train AI systems using human data, the result is human bias.
We would like to think that machine learning systems always produce the right answer within their problem domain. In reality, their performance is a direct result of the data used to train them: the answers a system gives in production are only as good as its training data.
Data collected by humans, such as survey responses, observations, or estimates, can carry built-in human biases. Even objective measurements can measure the wrong things or miss essential information about the problem domain.
The effects of biased data can be even more deceptive. AI systems often function as black boxes, which means technologists are unaware of how an AI came to its conclusion. This can make it particularly hard to identify any inequality, bias, or discrimination feeding into a particular decision.
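The idea that a model simply reproduces whatever patterns exist in its training data can be shown with a toy sketch. The scenario below is entirely hypothetical: a 1-nearest-neighbour "hiring" classifier is trained on historical decisions in which equally experienced candidates from one group were consistently rejected. The model never sees a rule about groups, yet its predictions inherit the bias from the labels.

```python
# A minimal sketch (hypothetical data, not from the webinar) showing how
# biased training labels propagate into a model's decisions. The "model"
# is a simple 1-nearest-neighbour classifier.

def nearest_neighbour_predict(train, candidate):
    """Return the label of the training example closest to `candidate`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda ex: dist(ex[0], candidate))
    return label

# Features: (years_experience, group), where group A = 0 and group B = 1.
# Labels: 1 = hired, 0 = rejected. Note that group B candidates were
# rejected at the same experience levels where group A candidates were hired.
biased_history = [
    ((2, 0), 0), ((5, 0), 1), ((8, 0), 1),
    ((2, 1), 0), ((5, 1), 0), ((8, 1), 0),
]

# Two identical resumes that differ only in group membership:
print(nearest_neighbour_predict(biased_history, (6, 0)))  # group A -> 1 (hired)
print(nearest_neighbour_predict(biased_history, (6, 1)))  # group B -> 0 (rejected)
```

A test case in the spirit the webinar describes would probe exactly this: hold every feature constant, vary only the sensitive attribute, and check whether the prediction changes.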