• 0 Posts
  • 7 Comments
Joined 1 year ago
Cake day: July 6th, 2023


  • Hm… I’d actually disagree with that conclusion? I think what the author is saying there is that ableism isn’t simply a matter of the words being used. A statement that treats disabled people as subhuman isn’t okay just because it avoids using these words - it’s still ableist.

    From the beginning of the article (emphasis mine):

    Note that only some of the words on this page are actually slurs. Many of the words and phrases on this page are not generally considered slurs, and in fact, may not actually be hurtful, upsetting, retraumatizing, or offensive to many disabled people. They are simply considered ableist (the way that referring to a woman as emotionally fragile is sexist, but not a slur).

    Not everyone has the ability to be mindful of how certain language originated in ableism and how using it reinforces that ableism. But for those of us who can, it’s a good idea to try.


  • It also said it would pay realistic premiums for certain product attributes, such as toothpaste with fluoride and deodorant without aluminum.

    Most toothpastes in the US have fluoride - it’s the ones that don’t which likely cost more (ones with “natural” ingredients, ones with hydroxyapatite…).

    The startup Synthetic Users has set up a service using OpenAI models in which clients—including Google, IBM, and Apple—can describe a type of person they want to survey, and ask them questions about their needs, desires, and feelings about a product, such as a new website or a wearable. The company’s system generates synthetic interviews that co-founder Kwame Ferreira says are “infinitely richer” and more useful than the “bland” feedback companies get when they survey real people.

    It amuses me greatly to think that companies trying to sell shit to people will be fooled by “infinitely richer” feedback. Real people give “bland” feedback because they just don’t care that much about a product, but I guess people would rather live in a fantasy where their widget is the next big thing.

    Overall, though, this horrifies me. Psychological research already has plenty of issues with replication and changing methodologies and/or metrics mid-study, and now they’re trying out “AI” participants? Even if it’s just used to create and test surveys that eventually go out to humans, it seems ripe for bias.

    I’ll take an example close to home - take studies on CFS/ME. A lot of people on the internet (including doctors) think CFS/ME is hypochondria, or malingering, or due to “false illness beliefs” - so how is an “AI” trained on the internet, tasked with thinking like a CFS/ME patient, going to answer questions?

    As patients we know what to look for when it comes to insincere/leading questions. “Do you feel anxious before exercise?” - the answer may be yes, because we know we’ll crash, but a question like this usually means researchers think resistance to activity is an irrational anxiety response that should be overcome. An “AI” would simply answer yes with no qualms or concerns, because it literally can’t think or feel (or withdraw from a study entirely).