Research Challenges. Past initial attempts to build explanatory models of performance based on linear models validated through ANOVA are still far from satisfactory. These approaches typically relied on generating all possible combinations of the components under examination, leading to an explosion in the number of cases to consider; we therefore need to develop greedy approaches that avoid such a combinatorial explosion. Moreover, the assumptions underlying IR models and methods, datasets, tasks, and metrics should be identified and explicitly formulated, in order to determine how far we depart from them in a specific application and to leverage this knowledge to explain observed performance more precisely. We also need a better understanding of evaluation metrics: not all metrics may be equally good at detecting the effect of different components, and we need to be able to predict which metric best fits a given set of components and interactions. Sets of more specialized metrics representing different user standpoints should be employed, and the relationships between system-oriented and user-/task-oriented evaluation measures (e.g. satisfaction, usefulness) should be determined. A related research challenge is how to exploit richer explanations of performance to design better and more reusable experimental collections in which the influence and bias of undesired and confounding factors are kept under control. Most importantly, we need to determine the features of datasets, systems, contexts, and tasks that affect the performance of a system. These features, together with the developed explanatory performance models, can eventually be exploited to train predictive models able to anticipate the performance of IR systems in new and different operational conditions.
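To make the contrast between full factorial enumeration and a greedy alternative concrete, the Python sketch below compares the two search strategies over a grid of IR pipeline components. Everything in it is illustrative: the component names (stemmer, model, expansion) and the evaluate() function are hypothetical stand-ins for running a real pipeline on a test collection and scoring it with a metric such as MAP, not anything specified in the text above.

from itertools import product

# Hypothetical component grid for an IR pipeline; factor names and
# values are illustrative only.
COMPONENTS = {
    "stemmer":   ["none", "porter", "krovetz"],
    "model":     ["bm25", "lm_dirichlet"],
    "expansion": ["none", "rm3"],
}

# Synthetic per-value scores so the sketch runs end to end; in a real
# setting evaluate() would execute the configured pipeline and return
# an effectiveness metric (e.g. MAP) on a test collection.
_WEIGHTS = {"none": 0.0, "porter": 0.02, "krovetz": 0.03,
            "bm25": 0.25, "lm_dirichlet": 0.23, "rm3": 0.04}

def evaluate(config):
    return 0.20 + sum(_WEIGHTS[v] for v in config.values())

def full_factorial(components):
    """Exhaustive grid: the number of runs is the PRODUCT of the
    factor sizes, which explodes as components are added."""
    keys = list(components)
    return [dict(zip(keys, vals)) for vals in product(*components.values())]

def greedy_search(components):
    """Tune one factor at a time, keeping the best value found so far.
    Cost is the SUM of the factor sizes instead of their product."""
    config = {k: vals[0] for k, vals in components.items()}
    for factor, values in components.items():
        config[factor] = max(values, key=lambda v: evaluate({**config, factor: v}))
    return config

if __name__ == "__main__":
    print("full factorial runs:", len(full_factorial(COMPONENTS)))   # 3*2*2 = 12
    print("greedy runs:", sum(len(v) for v in COMPONENTS.values()))  # 3+2+2 = 7
    print("greedy pick:", greedy_search(COMPONENTS))

Even on this toy grid the greedy pass evaluates 7 configurations instead of 12; with realistic numbers of components and values the gap between the sum and the product of factor sizes is what makes the exhaustive approach intractable. The trade-off is that a one-factor-at-a-time search can miss interaction effects between components, which is exactly why the text argues for explanatory models that identify such interactions.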
