- SciRep: Network topological determinants of pathogen spread. Perez-Ortiz, M., Manescu, P., Caccioli, F., Fernández-Reyes, D., Nachev, P., and Shawe-Taylor, J. Scientific Reports, 2022.
How do we best constrain social interactions to decrease transmission of communicable diseases? Indiscriminate suppression is unsustainable in the long term and presupposes that all interactions carry equal importance. Instead, transmission within a social network has been shown to be determined by its topology. In this paper, we deploy simulations to understand and quantify the impact of a set of topological network features on disease transmission, building a dataset of 9,000 interaction graphs using generators of different types of synthetic social networks. Independently of network topology, we hold the total volume of social interactions constant in our simulations, showing that, even with the same amount of social contact, some network structures are more resilient to spread than others. We find that a suitable intervention is the targeted suppression of unfamiliar and casual interactions that contribute to the network’s global efficiency. That is, pathogen spread is significantly reduced by limiting specific kinds of contact rather than their overall number. Our numerical studies might inspire further investigation in connection to public health, whether as an integrative framework for crafting and evaluating social interventions in communicable diseases with different social graphs or as a highlight of the network metrics that should be captured in social studies.
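The global efficiency referred to above can be made concrete. Below is a minimal stdlib-only sketch (not the paper's simulation code) that computes global efficiency, the average inverse shortest-path length over all node pairs, via breadth-first search, and shows that adding a few long-range "casual" ties to a purely local ring lattice raises efficiency; the graph sizes and shortcut choices are illustrative assumptions.

```python
from collections import deque

def shortest_path_lengths(adj, source):
    """BFS distances from source in an unweighted graph."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def global_efficiency(adj):
    """Average of 1/d(i, j) over all ordered node pairs (0 if unreachable)."""
    n = len(adj)
    if n < 2:
        return 0.0
    total = 0.0
    for i in adj:
        dist = shortest_path_lengths(adj, i)
        total += sum(1.0 / d for j, d in dist.items() if j != i)
    return total / (n * (n - 1))

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbours per side."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for step in range(1, k + 1):
            adj[i].add((i + step) % n)
            adj[(i + step) % n].add(i)
    return adj

# A ring lattice (only "familiar", local ties) versus the same lattice with
# a few long-range shortcuts added: shortcuts raise global efficiency.
lattice = ring_lattice(20, 2)
shortcut = ring_lattice(20, 2)
for a, b in [(0, 10), (5, 15), (3, 12)]:  # illustrative long-range ties
    shortcut[a].add(b)
    shortcut[b].add(a)
print(global_efficiency(lattice) < global_efficiency(shortcut))  # True
```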
- NatComms: Seasonal Arctic sea ice forecasting with probabilistic deep learning. Andersson, T., Hosking, S., Perez-Ortiz, M., Paige, B., Elliott, A., Russell, C., Law, S., Jones, D., Wilkinson, J., and Phillips, T. Nature Communications, 2021.
Anthropogenic warming has led to an unprecedented year-round reduction in Arctic sea ice extent. This has far-reaching consequences for indigenous and local communities, polar ecosystems, and global climate, motivating the need for accurate seasonal sea ice forecasts. While physics-based dynamical models can successfully forecast sea ice concentration several weeks ahead, they struggle to outperform simple statistical benchmarks at longer lead times. We present a probabilistic, deep learning sea ice forecasting system, IceNet. The system has been trained on climate simulations and observational data to forecast the next 6 months of monthly-averaged sea ice concentration maps. We show that IceNet advances the range of accurate sea ice forecasts, outperforming a state-of-the-art dynamical model in seasonal forecasts of summer sea ice, particularly for extreme sea ice events. This step-change in sea ice forecasting ability brings us closer to conservation tools that mitigate risks associated with rapid sea ice loss.
- IUI: X5Learn: A personalised learning companion at the intersection of AI and HCI. Perez-Ortiz, M., Dormann, C., Rogers, Y., Bulathwela, S., Kreitmayer, S., Yilmaz, E., Noss, R., and Shawe-Taylor, J. In 26th International Conference on Intelligent User Interfaces, 2021.
X5Learn (available at https://x5learn.org) is a human-centered, AI-powered platform supporting access to free online educational resources. X5Learn provides users with a number of educational tools for interacting with open educational videos, and a set of tools adapted to suit the pedagogical preferences of users. It is intended to support teachers and students alike. For teachers, it provides a powerful platform to reuse, revise, remix, and redistribute open courseware produced by others, whether videos, PDFs, exercises or other online material. For students, it provides a scaffolded and informative interface for selecting content to watch, read, annotate and review, as well as a powerful personalised recommendation system that can optimise learning paths and adjust to the user’s learning preferences. What makes X5Learn stand out from other educational platforms is how it combines human-centered design with AI algorithms and software tools, with the goal of making it intuitive and easy to use as well as making the AI transparent to the user. We present the core search tool of X5Learn, intended to support exploring open educational materials.
- JMLR: Tighter risk certificates for neural networks. Perez-Ortiz, M., Rivasplata, O., Shawe-Taylor, J., and Szepesvári, C. Journal of Machine Learning Research, 2021.
This paper presents an empirical study regarding training probabilistic neural networks using training objectives derived from PAC-Bayes bounds. In the context of probabilistic neural networks, the output of training is a probability distribution over network weights. We present two training objectives, used here for the first time in connection with training neural networks. These two training objectives are derived from tight PAC-Bayes bounds. We also re-implement a previously used training objective based on a classical PAC-Bayes bound, to compare the properties of the predictors learned using the different training objectives. We compute risk certificates for the learnt predictors, based on part of the data used to learn the predictors. We further experiment with different types of priors on the weights (both data-free and data-dependent priors) and neural network architectures. Our experiments on MNIST and CIFAR-10 show that our training methods produce competitive test set errors and non-vacuous risk bounds with much tighter values than previous results in the literature, showing promise not only to guide the learning algorithm through bounding the risk but also for model selection. These observations suggest that the methods studied here might be good candidates for self-certified learning, in the sense of using the whole data set for learning a predictor and certifying its risk on any unseen data (from the same distribution as the training data) potentially without the need for holding out test data.
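For orientation, the classical PAC-Bayes-kl bound from which such training objectives are typically derived can be stated as follows; the notation is the standard one from the PAC-Bayes literature, not copied from the paper itself:

```latex
% With probability at least 1 - \delta over an i.i.d. sample of size n,
% simultaneously for all posteriors Q over network weights:
\mathrm{kl}\!\left(\hat{e}(Q)\,\middle\|\,e(Q)\right)
  \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{n}}{\delta}}{n}
% where \hat{e}(Q) is the empirical error of the randomized predictor,
% e(Q) its true error, P a prior fixed before seeing the (bound) data,
% KL the Kullback-Leibler divergence between weight distributions, and
% kl(q || p) the binary KL divergence between Bernoulli parameters.
```

Inverting this bound in its second argument yields a numerical risk certificate for the learnt stochastic predictor, which is what makes self-certified learning possible without a held-out test set.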
- Vision: Spatio-chromatic contrast sensitivity under mesopic and photopic light levels. Wuerger, S., Ashraf, M., Kim, M., Martinovic, J., Perez-Ortiz, M., and Mantiuk, R. K. Journal of Vision, 2020.
Contrast sensitivity functions (CSFs) characterize the sensitivity of the human visual system at different spatial scales, but little is known as to how contrast sensitivity for achromatic and chromatic stimuli changes from a mesopic to a highly photopic range reflecting outdoor illumination levels. The purpose of our study was to further characterize the CSF by measuring both achromatic and chromatic sensitivities for background luminance levels from 0.02 cd/m2 to 7,000 cd/m2. Stimuli consisted of Gabor patches of different spatial frequencies and angular sizes, varying from 0.125 to 6 cpd, which were displayed on a custom high dynamic range (HDR) display with luminance levels up to 15,000 cd/m2. Contrast sensitivity was measured in three directions in color space, an achromatic direction, an isoluminant “red-green” direction, and an S-cone isolating “yellow-violet” direction, selected to isolate the luminance, L/M-cone opponent, and S-cone opponent pathways, respectively, of the early postreceptoral processing stages. Within each session, observers were fully adapted to the fixed background luminance (0.02, 2, 20, 200, 2,000, or 7,000 cd/m2). Our main finding is that the background luminance has a differential effect on achromatic contrast sensitivity compared to chromatic contrast sensitivity. The achromatic contrast sensitivity increases with higher background luminance up to 200 cd/m2 and then shows a sharp decline when background luminance is increased further. In contrast, the chromatic sensitivity curves do not show a significant sensitivity drop at higher luminance levels. We present a computational luminance-dependent model that predicts the CSF for achromatic and chromatic stimuli of arbitrary size.
- AAAI: TrueLearn: A family of Bayesian algorithms to match lifelong learners to open educational resources. Bulathwela, S., Perez-Ortiz, M., Yilmaz, E., and Shawe-Taylor, J. In Proceedings of the AAAI Conference on Artificial Intelligence, 2020.
The recent advances in computer-assisted learning systems and the availability of open educational resources today promise a pathway to providing cost-efficient, high-quality education to large masses of learners. One of the most ambitious use cases of computer-assisted learning is to build a lifelong learning recommendation system. Unlike short-term courses, lifelong learning presents unique challenges, requiring sophisticated recommendation models that account for a wide range of factors, such as the background knowledge of learners or the novelty of the material, while effectively maintaining the knowledge states of masses of learners for significantly longer periods of time (ideally, a lifetime). This work presents the foundations towards building a dynamic, scalable and transparent recommendation system for education, modelling learners’ knowledge from implicit data in the form of engagement with open educational resources. We (i) use a text ontology based on Wikipedia to automatically extract knowledge components of educational resources and (ii) propose a set of online Bayesian strategies inspired by the well-known areas of item response theory and knowledge tracing. Our proposal, TrueLearn, focuses on recommendations for which the learner has enough background knowledge (so they are able to understand and learn from the material), and the material has enough novelty to help the learner improve their knowledge about the subject and keep them engaged. We further construct a large open educational video lectures dataset and test the performance of the proposed algorithms, which show clear promise towards building an effective educational recommendation system.
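The flavour of online Bayesian updating from implicit engagement signals can be illustrated with a toy Beta-Bernoulli model per knowledge component. This is an illustration only, not the TrueLearn model itself (whose actual models build on item response theory and knowledge tracing); the class and the engagement stream below are hypothetical.

```python
class BetaSkill:
    """Beta-Bernoulli belief over a learner's state for one knowledge
    component, updated online from binary engagement signals.
    (Illustrative only; not the TrueLearn model itself.)"""

    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha = alpha  # pseudo-count of "engaged" observations
        self.beta = beta    # pseudo-count of "not engaged" observations

    def mean(self):
        """Posterior mean probability of engagement."""
        return self.alpha / (self.alpha + self.beta)

    def update(self, engaged):
        """Conjugate update: one Bernoulli observation per resource."""
        if engaged:
            self.alpha += 1.0
        else:
            self.beta += 1.0

skill = BetaSkill()
for signal in [True, True, False, True]:  # hypothetical engagement stream
    skill.update(signal)
print(round(skill.mean(), 2))  # → 0.67 (3 of 4 signals positive, uniform prior)
```

The conjugacy is what makes such updates cheap enough to maintain knowledge states for masses of learners over long periods.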
- IEEE Image Proc.: From pairwise comparisons and rating to a unified quality scale. Perez-Ortiz, M., Mikhailiuk, A., Zerman, E., Hulusic, V., Valenzise, G., and Mantiuk, R. IEEE Transactions on Image Processing, 2019.
The goal of psychometric scaling is the quantification of perceptual experiences: understanding the relationship between an external stimulus, the internal representation and the response. In this paper, we propose a probabilistic framework to fuse the outcomes of different psychophysical experimental protocols, namely rating and pairwise comparison experiments. Such a method can be used for merging existing datasets of a subjective nature and for experiments in which both types of measurement are collected. We analyze and compare the outcomes of both experimental protocols in terms of time and accuracy in a set of simulations and experiments with benchmark and real-world image quality assessment datasets, showing the necessity of scaling and the advantages of each protocol and of mixing them. Although most of our examples focus on image quality assessment, our findings generalize to any other subjective quality-of-experience task.
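Pairwise-comparison scaling of this kind conventionally rests on a Thurstonian observer model: each stimulus has a latent quality score corrupted by Gaussian noise, so the probability of preferring A over B depends only on the score difference. A minimal sketch of that model (the sigma convention below, one scale unit ≈ 75% preference, is a common choice assumed here, not taken from the paper):

```python
import math

def pref_probability(q_a, q_b, sigma=1.4826):
    """Thurstone Case V observer model: probability that stimulus A is
    preferred to B when latent quality scores carry additive Gaussian
    noise. sigma = 1.4826 maps a 1-unit score difference to ~75%
    preference (a common just-noticeable-difference convention)."""
    return 0.5 * (1.0 + math.erf((q_a - q_b) / (sigma * math.sqrt(2))))

# Equal qualities -> coin flip; one unit apart -> ~75% preference.
print(pref_probability(0.0, 0.0))              # 0.5
print(round(pref_probability(1.0, 0.0), 2))    # ~0.75
```

Maximising the likelihood of observed comparison counts under this model recovers the scores on a common scale, which is the basic mechanism that lets rating and comparison data be fused probabilistically.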
- Neurocomputing: On the use of evolutionary time series analysis for segmenting paleoclimate data. Perez-Ortiz, M., Duran-Rosal, A., Gutierrez, P., Sanchez-Monedero, J., Nikolaou, A., Fernandez-Navarro, F., and Hervas-Martinez, C. Neurocomputing, 2019.
Recent studies propose that different dynamical systems, such as climate, ecological and financial systems, among others, exhibit critical transition points referred to as tipping points (TPs). Climate TPs could severely affect millions of lives on Earth, so an active scientific community is working on finding early warning signals. This paper deals with the development of a time series segmentation algorithm for paleoclimate data, aimed at finding segments that share common statistical patterns. The proposed algorithm uses a clustering-based approach for evaluating candidate solutions and six statistical features, most of which have previously been considered in the detection of early warning signals of paleoclimate TPs. Given the limitations of classical statistical methods, we propose the use of a genetic algorithm to automatically segment the series, together with a method to compare segmentations. The final segments provided by the algorithm are used to construct a prediction model, whose promising results show the importance of segmentation for improving the understanding of a time series.
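Two of the statistical features standardly used as early-warning indicators are segment variance and lag-1 autocorrelation, both of which drift upward under critical slowing down. A minimal sketch of computing them per segment (the feature pair and the toy series are assumptions for illustration, not the paper's full six-feature set or its data):

```python
import statistics

def lag1_autocorr(x):
    """Lag-1 autocorrelation: values drifting toward 1 are a classic
    early-warning signal of critical slowing down."""
    n = len(x)
    mean = statistics.fmean(x)
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den if den else 0.0

def segment_features(series, breakpoints):
    """Variance and lag-1 autocorrelation for each segment delimited by
    the given breakpoints (indices where a new segment starts)."""
    bounds = [0] + sorted(breakpoints) + [len(series)]
    feats = []
    for lo, hi in zip(bounds, bounds[1:]):
        seg = series[lo:hi]
        feats.append((statistics.pvariance(seg), lag1_autocorr(seg)))
    return feats

# A hypothetical series: a flat regime followed by a trending one; the
# second segment shows both higher variance and higher autocorrelation.
series = [0.0, 0.1, -0.1, 0.0, 0.1, 1.0, 2.0, 3.0, 4.0, 5.0]
for var, ac in segment_features(series, [5]):
    print(round(var, 3), round(ac, 3))
```

In the paper's setting, a genetic algorithm searches over candidate breakpoint sets, scoring each segmentation by clustering segments in a feature space of this kind.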
- JMLR: ORCA: A Matlab/Octave toolbox for ordinal regression. Sanchez-Monedero, J., Gutierrez, P., and Perez-Ortiz, M. Journal of Machine Learning Research, 2019.
Ordinal regression, also named ordinal classification, studies classification problems where there exists a natural order between class labels. This structured order of the labels is crucial in all steps of the learning process in order to take full advantage of the data. ORCA (Ordinal Regression and Classification Algorithms) is a Matlab/Octave framework that implements and integrates different ordinal classification algorithms and specifically designed performance metrics. The framework greatly simplifies the task of experimental comparison, allowing the user to: (i) describe experiments in simple configuration files; (ii) automatically run different data partitions; (iii) parallelize the executions; (iv) generate a variety of performance reports; and (v) include new algorithms through its intuitive interface. Source code, binaries, documentation, descriptions and links to data sets and tutorials (including examples for educational purposes) are available at https://github.com/ayrna/orca.
- Liver Trans.: Validation of artificial neural networks as a methodology for donor-recipient matching for liver transplantation. Ayllon, M., Ciria, R., Cruz-Ramirez, M., Perez-Ortiz, M., Gomez, I., Valente, R., O’Grady, J., Mata, M., Hervas-Martinez, C., Heaton, N., et al. Liver Transplantation, 2018.
In 2014, we reported a model for donor-recipient (D-R) matching in liver transplantation (LT) based on artificial neural networks (ANNs) from a Spanish multicenter study (Model for Allocation of Donor and Recipient in España [MADR-E]). The aim is to test the ANN-based methodology in a different European health care system in order to validate it. An ANN model was designed using a cohort of patients from King’s College Hospital (KCH; n = 822). The ANN was trained and tested using KCH pairs for both 3- and 12-month survival models. End points were probability of graft survival (correct classification rate [CCR]) and nonsurvival (minimum sensitivity [MS]). The final model is a rule-based system for facilitating the decision about the most appropriate D-R matching. Models designed for KCH had excellent prediction capabilities for both 3 months (CCR–area under the curve [AUC] = 0.94; MS-AUC = 0.94) and 12 months (CCR-AUC = 0.78; MS-AUC = 0.82), almost 15% higher than the best obtained by other known scores such as Model for End-Stage Liver Disease and balance of risk. Moreover, these results improve the previously reported ones in the multicentric MADR-E database. In conclusion, the use of ANN for D-R matching in LT in other health care systems achieved excellent prediction capabilities supporting the validation of these tools. It should be considered as the most advanced, objective, and useful tool to date for the management of waiting lists.
- IJCNN: A mixture of experts model for predicting persistent weather patterns. Perez-Ortiz, M., Gutierrez, P., Tino, P., Casanova-Mateo, C., and Salcedo-Sanz, S. In 2018 International Joint Conference on Neural Networks (IJCNN), 2018.
Weather and atmospheric patterns are often persistent. The simplest weather forecasting method is the so-called persistence model, which assumes that the future state of a system will be similar (or equal) to the present state. Machine learning (ML) models are widely used in weather forecasting applications, but they need to be compared to the persistence model to determine whether they provide a competitive solution to the problem at hand. In this paper, we devise a new model for predicting low-visibility events at airports using the concept of mixture of experts. The visibility level is coded as two ordered categorical variables: cloud height and runway visual height. The underlying system in this application is stagnant in approximately 90% of cases, and standard ML models fail to improve on the performance of the persistence model. Because of this, instead of trying simply to beat the persistence model with ML, we use persistence as a baseline and learn an ordinal neural network model that refines its results by focusing on learning weather fluctuations. The results show that the proposal outperforms persistence and other ordinal autoregressive models, especially for longer prediction horizons and for the runway visual height variable.
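The persistence baseline discussed above is trivial to implement: the forecast at horizon h simply repeats the last observed value, y(t + h) = y(t). A minimal sketch, evaluated with mean absolute error on a hypothetical, mostly stagnant category series (the data values are illustrative, not the paper's):

```python
def persistence_forecast(series, horizon):
    """Persistence model: predict y(t + horizon) = y(t)."""
    return series[:-horizon]

def mae(pred, truth):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

# Hypothetical hourly visibility categories; a system that is stagnant
# ~90% of the time makes persistence a strong baseline ML must beat.
series = [3, 3, 3, 2, 2, 2, 2, 1, 1, 1]
h = 1
pred = persistence_forecast(series, h)
truth = series[h:]
print(mae(pred, truth))  # small: the series changes in only 2 of 9 steps
```

Because errors occur only at the rare transitions, a model can outperform this baseline only by learning the fluctuations themselves, which is exactly what the proposed mixture-of-experts refinement targets.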