Roots, Repercussions and Rectification of Bias in NLP Transfer Learning

September 29, 2021 | 8:32 am
Benjamin Ajayi-Obe & David Hopes, Depop

The popularization of large pre-trained language models has led to their increased adoption in commercial settings. However, these models are usually pre-trained on raw, uncurated corpora that are known to contain a plethora of biases. In real-world situations this often produces undesirable model behaviour that can cause societal or individual harm. In this talk, we explore the sources of this bias, as well as recent methods for measuring and mitigating it.
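The abstract mentions measuring bias without specifying a method, so here is a minimal sketch of one common probe: comparing the probabilities a pre-trained masked language model assigns to demographic terms in a stereotyped context. The model (`bert-base-uncased`), the prompt, and the target words are illustrative assumptions, not material from the talk.

```python
# A minimal sketch of probing gender bias in a masked language model
# by comparing fill-mask probabilities for target pronouns.
# Requires: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# An occupation-based prompt; a large gap between the two scores
# suggests the model has absorbed a gendered association.
template = "The nurse said that [MASK] would be back soon."

for result in fill(template, targets=["he", "she"]):
    print(f"{result['token_str']:>4}: p = {result['score']:.4f}")
```

Running this kind of probe across many occupation templates gives a rough picture of the associations the model picked up from its training corpus; mitigation methods discussed in the talk aim to reduce such gaps without degrading the model's usefulness.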
