posted by martyb on Monday August 24 2020, @06:45AM   Printer-friendly
from the department-of-unwanted-hyperfocus dept.

Researchers at Cornell University and the Technische Universität Berlin have studied the problem that more popular items get priority in search results, creating a positive feedback loop that unfairly deprecates other, equally valuable items.

Rankings are the primary interface through which many online platforms match users to items (e.g. news, products, music, video). In these two-sided markets, not only do the users draw utility from the rankings, but the rankings also determine the utility (e.g. exposure, revenue) for the item providers (e.g. publishers, sellers, artists, studios). It has already been noted that myopically optimizing utility to the users – as done by virtually all learning-to-rank algorithms – can be unfair to the item providers. We, therefore, present a learning-to-rank approach for explicitly enforcing merit-based fairness guarantees to groups of items (e.g. articles by the same publisher, tracks by the same artist). In particular, we propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data. The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility, dynamically adapting both as more data becomes available. In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.

Journal Reference:
Marco Morik, Ashudeep Singh, Jessica Hong, and Thorsten Joachims. 2020. Controlling Fairness and Bias in Dynamic Learning-to-Rank. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '20), July 25–30, 2020, Virtual Event, China. ACM, New York, NY, USA. DOI: https://doi.org/10.1145/3397271.3401100
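
For readers curious what such a "controller" might look like in practice, here is a minimal Python sketch of the idea from the abstract: rank items by estimated relevance plus a term proportional to how far their group's accumulated exposure lags behind its merit. This is not the authors' implementation; the click model, the helper names (position_bias, fairness_error, rank_items), and all parameter values are illustrative assumptions.

```python
import numpy as np

# Toy sketch of fairness-controlled dynamic learning-to-rank, loosely in the
# spirit of Morik et al. (2020). Everything below is illustrative, not the
# paper's actual algorithm or code.

def position_bias(rank, eta=1.0):
    """Exposure/examination probability of the item shown at a given rank."""
    return 1.0 / (rank + 1) ** eta

def fairness_error(exposure, merit, groups, g):
    """How far group g's exposure-per-merit lags behind the best-treated
    group (nonnegative by construction)."""
    def ratio(h):
        idx = groups == h
        return exposure[idx].sum() / max(merit[idx].sum(), 1e-9)
    return max(ratio(h) for h in np.unique(groups)) - ratio(g)

def rank_items(est_rel, exposure, groups, lam=0.01):
    """Score = estimated relevance + lam * group fairness error (a simple
    proportional controller); return item indices, best first."""
    err = np.array([fairness_error(exposure, est_rel, groups, g) for g in groups])
    return np.argsort(-(est_rel + lam * err))

# Simulate repeated rankings with a position-biased click model.
rng = np.random.default_rng(0)
true_rel = np.array([0.8, 0.75, 0.7, 0.3, 0.25])   # unknown to the ranker
groups = np.array([0, 0, 0, 1, 1])                 # e.g. two publishers
est_rel = np.full_like(true_rel, 0.5)              # running relevance estimate
exposure = np.zeros_like(true_rel)
clicks = np.zeros_like(true_rel)
impressions = np.zeros_like(true_rel)

for t in range(2000):
    ranking = rank_items(est_rel, exposure, groups)
    for pos, item in enumerate(ranking):
        p_exam = position_bias(pos)
        exposure[item] += p_exam
        if rng.random() < p_exam:                  # user examines this slot
            impressions[item] += 1
            clicks[item] += rng.random() < true_rel[item]
    # Laplace-smoothed click rate as a stand-in for the paper's unbiased
    # inverse-propensity relevance estimator.
    est_rel = (clicks + 1.0) / (impressions + 2.0)

print("exposure per group:",
      [round(float(exposure[groups == g].sum()), 1) for g in np.unique(groups)])
```

In the paper, the relevance estimates come from unbiased estimators over implicit feedback and the controller comes with convergence guarantees; the smoothed click rate above is only a stand-in to keep the sketch self-contained.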

Maybe this, if deployed widely, can help reduce the tendency of discourse to fragment into isolated silos.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by The Mighty Buzzard on Tuesday August 25 2020, @03:07PM (2 children)

    by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Tuesday August 25 2020, @03:07PM (#1041635) Homepage Journal

    What the individual making the search wants is irrelevant; serving the correct thing to the most people is their job.

    --
    My rights don't end where your fear begins.
  • (Score: 0) by Anonymous Coward on Tuesday August 25 2020, @06:36PM (1 child)

    by Anonymous Coward on Tuesday August 25 2020, @06:36PM (#1041744)

    No, that's not even wrong. The search engine isn't supposed to care what you search for, whether it be popular or not. It's supposed to give you what you ask for, and what you ask for is supposed to be what you want. A search engine is never going to know enough to know better than the user what the user currently wants.

    • (Score: 2) by The Mighty Buzzard on Wednesday August 26 2020, @02:10PM

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Wednesday August 26 2020, @02:10PM (#1042130) Homepage Journal

      How stupid are you? Search engines cannot read minds, so they absolutely cannot know better than the user what the user wants. They can only make a guess, which is going to be largely based on what most people searching for $foo want.

      --
      My rights don't end where your fear begins.