Common use of Research Challenges Clause in Contracts

Research Challenges. These general research questions manifest themselves along the entire information retrieval "stack" and motivate a broad range of concrete research directions to be investigated:

- Does the desire to present fair answers to users necessitate different content acquisition methods?
- If traceability is essential, how can we ensure that basic normalization steps, such as content filtering and named-entity normalization, do not obfuscate it?
- How can we give fairness assurances for novel retrieval paradigms (e.g., neural retrieval models trained and evaluated on historic relevance labels obtained from pooling mainly exact term-matching systems)?
- How should we design an information retrieval system's logging and experimental environment so that it guarantees fair, confidential, and accurate offline and online evaluation and learning?
- Can exploration policies be designed such that they comply with guarantees on performance?
- How can system changes learned online be made explainable?

Indexing structures and practices need to be designed or revisited in terms of their ability to accommodate downstream fairness and transparency operations. This may pose novel requirements on compression and sharding schemes as fair retrieval systems begin requesting aggregate statistics that go beyond what is currently required for ranking purposes (the first sketch below gives a concrete example).

Interface design faces the challenge of presenting the newly generated types of information (such as provenance, explanations, or audit material) in a useful manner while retaining effectiveness towards their original purpose.

Retrieval models are becoming more complex (e.g., deep neural networks for IR) and will require more sophisticated mechanisms for explainability and traceability. Models, especially in conversational interaction contexts, will need to be "interrogable", i.e., make effective use of users' queries about explainability (e.g., "why is this search result returned?"); the second sketch below illustrates one simple form such an answer can take.

Recommender systems have a historic demand for explainability geared towards boosting adoption and conversion rates of recommendations. In addition to these primarily economic considerations, transparent and accountable recommender systems need to advance further and ensure fair and auditable recommendations that are robust to changes in product portfolio or user context. Such interventions may take a considerably different shape than those designed for explaining the results of ranked retrieval systems.

User models will face the novel challenge of personalizing retrieval services in a fair, explainable, and transparent manner. This is particularly relevant in the context of diversity and the way in which biased or heavily polarizing topics and information sources are handled. Additionally, transparent retrieval systems will require new personalization techniques that determine the right level of explanation for different sets of requirements (e.g., explanations that are effective for novice searchers, professional journalists, or policy makers vs. explanations for highly technology-affine search engineers investigating system failures). Finally, such personalization should be reliable in terms of robustness to confounding external context changes.

Efficiency will be a key challenge in serving explanations in real time. Structures and models will need to accommodate on-demand calculation as well as caching or approximate explanations in order to meet runtime and latency goals (the third sketch below outlines one such scheme).
In addition, a key challenge will be the design of indexing structures and models that are fair without compromising efficiency or evaluation quality.
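
To make the indexing requirement concrete, here is a minimal sketch, in Python, of an inverted index whose posting lists also track a per-group document frequency, the kind of aggregate statistic a fairness-aware ranker or auditor might request beyond standard term statistics. All names and the flat "group" label are assumptions for illustration, not a proposal from the original text.

    from collections import Counter, defaultdict

    class FairnessAwareIndex:
        """Inverted index whose posting lists also record how many documents
        from each (hypothetical) provider group contain each term."""

        def __init__(self):
            self.postings = defaultdict(list)     # term -> [(doc_id, tf)]
            self.group_df = defaultdict(Counter)  # term -> {group: doc count}

        def add_document(self, doc_id, terms, group):
            for term, tf in Counter(terms).items():
                self.postings[term].append((doc_id, tf))
                self.group_df[term][group] += 1   # fairness-oriented statistic

An auditor could then ask, for instance, how the candidate pool for a term splits across groups via index.group_df["loan"], a query a conventional ranking-only index has no reason to support.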
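As one minimal illustration of an "interrogable" ranker, the following sketch decomposes a classical BM25 score into additive per-term contributions, which can back a simple answer to "why is this search result returned?". The function and argument names are assumptions rather than an established API, and neural rankers would require attribution methods instead of this closed-form decomposition.

    import math
    from collections import Counter

    K1, B = 1.2, 0.75  # common BM25 defaults

    def bm25_contributions(query_terms, doc_terms, doc_freq, n_docs, avg_doc_len):
        """Return each query term's additive contribution to the BM25 score,
        so the largest contributors can be surfaced as an explanation."""
        tf = Counter(doc_terms)
        doc_len = len(doc_terms)
        contrib = {}
        for term in query_terms:
            df = doc_freq.get(term, 0)
            if df == 0 or tf[term] == 0:
                contrib[term] = 0.0  # term absent from document or corpus
                continue
            idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
            norm = tf[term] * (K1 + 1) / (
                tf[term] + K1 * (1 - B + B * doc_len / avg_doc_len))
            contrib[term] = idf * norm
        return contrib

Sorting the returned dictionary by value yields a user-facing answer of the form "returned mainly because it matches 'fairness' and 'audit'".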
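Finally, on the latency point, below is a minimal sketch of one possible serving scheme: an LRU cache of exact explanations, with a cheap approximate answer on a cache miss and the exact computation deferred to an offline queue. explain_approx is a hypothetical stand-in for whatever inexpensive explainer a system has available.

    from collections import OrderedDict

    def explain_approx(query, doc_id):
        # Hypothetical cheap explainer, e.g. top matching terms only.
        return f"(approximate) top matching terms for {doc_id} given '{query}'"

    class ExplanationCache:
        """Serve a cached exact explanation when available; otherwise answer
        immediately with an approximation and queue the exact computation."""

        def __init__(self, capacity=10_000):
            self.cache = OrderedDict()
            self.capacity = capacity
            self.pending = []                # keys awaiting exact explanation

        def get(self, query, doc_id):
            key = (query, doc_id)
            if key in self.cache:
                self.cache.move_to_end(key)  # mark as recently used
                return self.cache[key]
            self.pending.append(key)         # exact version computed offline
            return explain_approx(query, doc_id)

        def store_exact(self, key, explanation):
            self.cache[key] = explanation
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)  # evict least recently used

Whether an approximate explanation is acceptable at serving time is itself one of the open questions raised above; the cache merely makes the runtime trade-off explicit.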

Appears in 3 contracts

Sources: End User Agreement, End User Agreement, End User Agreement