Object recognition

Responsible: Àgata Lapedriza, David Masip

Object recognition in images remains one of the most important research topics in computer vision. Given an image or a video, the goal of object recognition is to recognize and localize all the objects they contain.

In recent years, this topic has experienced an impressive gain in performance thanks to Deep Neural Networks [1] and big datasets such as ImageNet [2]. Despite these research efforts, object recognition remains an unsolved problem. Methods that perform in real time (such as Deformable Part Models [3]) have low detection accuracy, while methods that show higher performance cannot run in real time. Indeed, even the best current algorithms for object recognition are still far from human performance. In this research line we focus on improving current systems, both in terms of accuracy and speed.

For more information please see http://sunai.uoc.edu/researchLines/Object.html or contact alapedriza@uoc.edu

[1] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.

[2] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proc. CVPR, 2009.

[3] P. Felzenszwalb, R. Girshick, D. McAllester, D. Ramanan. Object Detection with Discriminatively Trained Part Based Models. IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 32, No. 9, Sep. 2010.

Scene recognition and understanding

Responsible: Àgata Lapedriza

Understanding complex visual scenes is one of the hallmark tasks of computer vision. Given a picture or a video, the goal of scene understanding is to build a representation of its content (e.g., which objects appear in the picture, how they are related, what actions any people in it are performing, what place is depicted, etc.).

With the appearance of large-scale databases like ImageNet [1] and Places [2], and the recent success of machine learning techniques such as Deep Neural Networks [3], scene understanding has progressed substantially, making it possible to build vision systems capable of addressing some of the tasks mentioned above.

In this research line, in collaboration with the computer vision group at the Massachusetts Institute of Technology, our goal is to improve existing algorithms for scene understanding and to define new problems that become reachable now, thanks to the recent advances in neural networks and machine learning.

For more information please see http://sunai.uoc.edu/researchLines/Scene.html or contact alapedriza@uoc.edu

[1] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. Imagenet: A large-scale hierarchical image database. In Proc. CVPR, 2009.

[2] B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. "Learning Deep Features for Scene Recognition using Places Database." Advances in Neural Information Processing Systems 27 (NIPS), 2014.

[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, 2012.

Recognition of facial expressions

Responsible: David Masip
Facial expressions are a very important source of information for the development of new technologies. Humans communicate emotions through their faces, and psychologists have studied emotions in faces since the early works of Charles Darwin [1]. One of the most successful emotion models is the Facial Action Coding System (FACS [2]), where a particular set of action units (facial muscle movements) act as the building blocks of 6 basic emotions (happiness, surprise, fear, anger, disgust, sadness).
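As an illustration, FACS coding can be sketched as a lookup from detected action units to candidate basic emotions. The AU combinations below are a simplified, illustrative subset of the well-known prototypes (e.g., AU6 + AU12 for happiness), not a complete coding:

```python
# Illustrative subset of prototypical Action Unit (AU) combinations for
# three basic emotions; simplified for this sketch.
BASIC_EMOTION_AUS = {
    "happiness": {6, 12},        # cheek raiser + lip corner puller
    "surprise":  {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "sadness":   {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
}

def match_emotions(detected_aus):
    """Return the basic emotions whose prototypical AU set is fully present."""
    return [emo for emo, aus in BASIC_EMOTION_AUS.items()
            if aus <= set(detected_aus)]

print(match_emotions([1, 2, 5, 26, 12]))  # ['surprise']: happiness lacks AU 6
```

In practice, detecting the individual AUs from images is the hard supervised-learning problem this research line addresses; the mapping step itself is simple.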

The automatic understanding of this universal language (very similar across almost all cultures) is one of the most important research areas in computer vision, with applications in many fields, such as the design of intelligent user interfaces, human-computer interaction, diagnosis of disorders or even reactive advertising. In this research line we propose to design and apply state-of-the-art supervised algorithms to detect and classify emotions and Action Units.

Nevertheless, there exist far more emotions beyond this basic set. From facial expressions we can predict, with better-than-chance accuracy, the outcome of a negotiation, the preferences of users in binary decisions [3], perceived deception, etc. In this research line we collaborate with the Social Perception Lab at Princeton University (http://tlab.princeton.edu/) to apply automated algorithms to real data from psychology labs.

For more information please see http://sunai.uoc.edu/researchLines/facialExpression.html or contact dmasipr@uoc.edu.

[1] Darwin, Charles (1872), The expression of the emotions in man and animals, London: John Murray.

[2] P. Ekman and W. Friesen. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, 1978.

[3] Masip D, North MS, Todorov A, Osherson DN (2014) Automated Prediction of Preferences Using Facial Expressions. PLoS ONE 9(2): e87434. doi:10.1371/journal.pone.0087434

Human Pose Recovery and Behavior Analysis

Responsible: Xavier Baró
Human action/gesture recognition is a challenging area of research that deals with the problem of recognizing people in images, detecting and describing body parts, inferring their spatial configuration, and performing action/gesture recognition from still images or image sequences, including multi-modal data. Because of the huge space of possible human body configurations, body pose recovery is a difficult problem that involves dealing with several distortions: illumination changes, partial occlusions, changes in the point of view, rigid and elastic deformations, or high inter- and intra-class variability, just to mention a few. Despite the difficulty of the problem, modern computer vision techniques and new trends deserve further attention, and promising results are expected in the coming years.

Moreover, several subareas have recently been defined, such as Affective Computing, Social Signal Processing, Human Behavior Analysis, or Social Robotics. The effort involved in this area of research will be compensated by its potential applications: TV production, home entertainment (multimedia content analysis), education, sociology research, surveillance and security, and improved quality of life by means of monitoring or automatic artificial assistance, among others.

For more information please see http://sunai.uoc.edu/researchLines/HPRBA.html or contact xbaro@uoc.edu

Biologically-inspired computer vision

Responsible: David Masip
The complex network that forms the human brain allows us to recognize thousands of objects, actions and scenes in a few milliseconds with almost no effort. The vision problem can be considered solved in the biological brain, but we are far from achieving satisfying solutions in computational systems. In this topic we propose to develop computational algorithms inspired by the human brain or, more specifically, by the ventral visual processing stream. Previous research has shown that a specifically tuned bank of filters can mimic the processing hierarchy from V1 to the IT cortex (inferior temporal cortex) [1]. Nevertheless, the "untangling" process that obtains meaningful information from "pixel" images in the retina remains unknown. We will propose deep learning architectures [2] to obtain robust computational algorithms for machine vision tasks.
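A minimal sketch of the kind of tuned filter bank mentioned above, assuming Gabor filters as the V1-like oriented edge detectors (the parameter values here are illustrative, not those used in [1]):

```python
import numpy as np

def gabor_kernel(size=11, wavelength=5.0, orientation=0.0, sigma=3.0, gamma=0.5):
    """A single Gabor filter: an oriented edge detector commonly used to
    model V1 simple cells. Parameter values are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the desired orientation.
    xr = x * np.cos(orientation) + y * np.sin(orientation)
    yr = -x * np.sin(orientation) + y * np.cos(orientation)
    # Gaussian envelope modulated by a cosine carrier.
    return (np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
            * np.cos(2 * np.pi * xr / wavelength))

# A small bank at 4 orientations, in the spirit of cortex-like "S1" layers.
bank = [gabor_kernel(orientation=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```

Convolving an image with such a bank and pooling the responses is the first stage of cortex-like architectures; deep networks learn analogous filters in their first layer.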

The algorithms developed will be applied to real computer vision problems in neuroscience, at the Princeton Neuroscience Institute, ranging from the detection and tracking of rodents in low-resolution videos to image segmentation, limb detection, and motion estimation of whiskers using high-speed cameras.

For more information please see http://sunai.uoc.edu/researchLines/bioCV.html or contact dmasipr@uoc.edu

[1] Serre, Thomas, Lior Wolf, Stanley Bileschi, Maximilian Riesenhuber, and Tomaso Poggio. "Robust object recognition with cortex-like mechanisms." Pattern Analysis and Machine Intelligence, IEEE Transactions on 29, no. 3 (2007): 411-426.

[2] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." In Advances in neural information processing systems, pp. 1097-1105. 2012.

Parallel and Distributed Scientific Applications: Performance and Efficiency

Responsible: Josep Jorba
The growing variety of parallel and distributed programming paradigms and environments available for concurrent computation still faces several bottlenecks when it comes to producing efficient applications.

We need to know the platforms, their performance, the underlying hardware and networking technologies, and we must be able to produce optimized software that statically or dynamically may take advantage of the computational resources available.

In this research line, we study different approaches to building better scientific applications, and to building tools (via automatic performance analysis) that can understand the application model and the underlying programming paradigm and try to tune their performance to a dynamically changing computational environment, in which the resources (and their characteristics) can be homogeneous or heterogeneous depending on the hardware platform. In particular, we focus our research on shared-memory and message-passing paradigms, and on many/multi-core environments including multi-core CPUs, GPUs (graphics card computing) and cluster/grid/cloud/supercomputing platforms.
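One reason performance analysis and tuning matter is captured by Amdahl's law, which bounds the speedup achievable when only part of an application parallelizes. A quick sketch:

```python
def amdahl_speedup(parallel_fraction, n_workers):
    """Upper bound on speedup when only a fraction of the work parallelizes
    perfectly and the rest stays serial (Amdahl's law)."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / n_workers)

# Even with 95% parallel work, 64 cores give at most ~15.4x:
print(round(amdahl_speedup(0.95, 64), 1))  # 15.4
```

This is why automatic performance analysis focuses on finding and shrinking the serial and communication bottlenecks rather than simply adding cores.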

Contact1: http://in3.uoc.edu/opencms_portalin3/opencms/en/investigadors/list/jorba_esteva_josep

Contact2: https://www.researchgate.net/profile/Josep_Jorba_Esteve

Reliability and availability issues in distributed computer systems

Responsible: Joan Manuel Marquès
Increasingly, services are being deployed over large-scale computational and storage infrastructures. To meet ever-increasing computational demands and reduce both hardware and system administration costs, these infrastructures have begun to include Internet resources distributed over enterprise and residential broadband networks. As these infrastructures increase in scale to hundreds of thousands to millions of resources, issues of resource availability and service reliability inevitably emerge.

Contributive distributed systems based on non-dedicated resources provide attractive benefits but face at least one significant challenge: nodes can enter and leave the collective on a whim, because each machine may be separately owned and managed. At the same time, resource availability is critical for the reliability and responsiveness (low response latency) of services. Groups of available resources are often required to execute tightly coupled distributed and parallel algorithms of services. Moreover, load spikes observed with Internet services require guarantees that a collection of resources is available.

The goal of this research line is to determine and evaluate predictive methods that ensure the availability of a collection of resources. We strive to achieve collective availability from non-dedicated resources distributed around the Internet, arguably the most unreliable type of resource worldwide. Our predictive methods could work in coordination with a virtualization layer that masks resource unavailability.
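As a deliberately simplified illustration of collective availability, suppose each node is up independently with probability p (real availabilities are correlated and non-stationary, which is what makes prediction a research problem); the chance that at least k of n nodes are up then follows a binomial tail, and over-provisioning restores reliability:

```python
from math import comb

def collective_availability(n, k, p):
    """P(at least k of n nodes are up), each up independently with
    probability p. A simplified i.i.d. model, for illustration only."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With unreliable nodes (p = 0.7), redundancy makes the collective reliable:
print(collective_availability(10, 8, 0.7))  # ~0.38: 10 nodes rarely give 8
print(collective_availability(20, 8, 0.7))  # ~0.999: over-provisioning helps
```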

More details: http://in3.uoc.edu/opencms_portalin3/opencms/en/investigadors/list/marques_puig_joan_manuel

Community cloud systems based on non-dedicated resources

Responsible: Joan Manuel Marquès
Many online communities exist nowadays: social networks, open source development, Wikipedia, Wikileaks etc. These communities have thousands or even millions of participants and are commonly hosted in resources belonging to entities not directly related with participants in the community. Considering that most of the participants' computers are underutilised - i.e. have more resources than are actually used in daily activity - we are studying how to take advantage of these surplus resources to be able to host communities using only the resources provided by their members. We name these kinds of communities contributory communities. More precisely, we are working on a) availability prediction of non-dedicated resources; b) privacy guarantees, especially anonymity; and c) efficient deployment and storage.

Therefore, we are looking for researchers interested in large-scale distributed systems applied to contributory communities in fields such as availability guarantees, privacy, efficient deployment of services or reliable storage.

More details: http://in3.uoc.edu/opencms_portalin3/opencms/en/investigadors/list/marques_puig_joan_manuel

Simulation environments for networking and distributed systems

Responsible: Joan Manuel Marquès
Large-scale testbeds like PlanetLab open new opportunities to test and simulate network protocols, distributed systems or applications. An open issue in these kinds of systems is how to create, deploy, execute and monitor experiments in a coordinated way, in order to take advantage of the real deployment of systems and protocols. A great deal of research needs to be undertaken to understand which kinds of systems can benefit from these environments and which are better simulated in classical simulation environments. Adapting existing simulation environments to these new systems is not a trivial process, and much work has to be done in order to characterize the distributed environment and its coordination needs. We are especially interested in building simulation environments for distributed systems or networking in large-scale testbeds, both for research and teaching.

More details: http://in3.uoc.edu/opencms_portalin3/opencms/en/investigadors/list/marques_puig_joan_manuel

Time Synchronized Channel Hopping (TSCH)

Responsible: Xavi Vilajosana
The revolution of operational technologies in industry is being led by a new generation of low-power wireless standards rooted in a technique known as Time Synchronized Channel Hopping (TSCH). TSCH was designed to allow IEEE802.15.4 devices to support a wide range of applications including, but not limited to, industrial ones. At its core is a medium access technique which uses time synchronization to achieve ultra-low-power operation and channel hopping to enable high reliability. Synchronization accuracy impacts power consumption, and can vary from microseconds to milliseconds depending on the solution.
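The channel-hopping part of TSCH can be sketched concisely: all synchronized nodes share an Absolute Slot Number (ASN), and the frequency used in a timeslot is derived from the ASN and the cell's channel offset. In the 6TiSCH minimal configuration the slotframe length (101) is coprime with the 16-entry hopping sequence, so the same cell cycles through all channels over successive slotframe iterations:

```python
# TSCH channel-hopping rule (IEEE802.15.4e): the channel used in a slot is
# a function of the shared Absolute Slot Number (ASN) and the cell's offset.
HOPPING_SEQUENCE = list(range(11, 27))  # the 16 IEEE802.15.4 channels at 2.4 GHz

def tsch_channel(asn, channel_offset):
    return HOPPING_SEQUENCE[(asn + channel_offset) % len(HOPPING_SEQUENCE)]

# With a slotframe of length 101, the same cell (channel offset 3) hops to a
# different channel each iteration, defeating narrow-band interference:
print([tsch_channel(asn, 3) for asn in (0, 101, 202, 303)])  # [14, 19, 24, 13]
```

A slotframe length that shared a factor with 16 would pin each cell to a fixed channel, which is why lengths coprime with the sequence length are chosen.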

IEEE802.15.4e is the latest generation of ultra-low-power and reliable networking solutions for low-power and lossy networks (LLNs). [RFC5673] discusses industrial applications, and highlights the harsh operating conditions as well as the stringent reliability, availability, and security requirements for an LLN to operate in an industrial environment. In these environments, vast deployment areas with large (metallic) equipment cause multi-path fading and interference that thwart any attempt at a reliable single-channel solution; the channel agility of TSCH is the key to its ultra-high reliability. Commercial networking solutions are available today in which nodes consume tens of micro-amps on average, with end-to-end packet delivery ratios over 99.999%.

IEEE802.15.4e TSCH focuses on the MAC layer only. This clean layering allows TSCH to fit under an IPv6-enabled protocol stack for Low-Power and Lossy Networks, running 6LoWPAN [RFC6282], the IPv6 Routing Protocol for Low-power and Lossy Networks (RPL) [RFC6550] and the Constrained Application Protocol (CoAP) [RFC7252]. What is missing is a Logical Link Control (LLC) layer between the IP abstraction of a link and the TSCH MAC, in charge of scheduling a timeslot for a given packet coming down the stack from the upper layer.

While [IEEE802154e] defines the mechanisms for a TSCH node to communicate, it does not define the policies to build and maintain the communication schedule, match that schedule to the multi-hop paths maintained by RPL, adapt the resources allocated between neighbor nodes to the data traffic flows, enforce a differentiated treatment for data generated at the application layer and signaling messages needed by 6LoWPAN and RPL to discover neighbors, react to topology changes, self-configure IP addresses, or manage keying material.

This research line aims to study mechanisms to attain efficient resource utilization, allocation and management in low-power wireless sensor networks, in the context of IEEE802.15.4e and the OpenWSN.org initiative. It focuses on the study of scheduling techniques for accessing the wireless medium, taking into account energy consumption and energy availability, among other factors. The line also aims to push forward the management capabilities of such constrained networks by developing the concepts of differentiated services, label switching and resource reservation in LLNs. The topic is currently being studied at the IETF 6TiSCH working group, where candidates will contribute their results.

Optimization and Simulation of Industrial and Engineering Systems

Responsible: Angel A. Juan
Internet Computing & Systems Optimization (ICSO) is an IN3 official programme supported by the DPCS research group. One of the main research topics in ICSO is the development of new hybrid algorithms and methods which combine applied optimization (e.g. heuristics and metaheuristics), discrete-event simulation, and data analysis to support decision-making processes in realistic environments. In particular, we are interested in real-life applications of these algorithms to logistics, transportation and production systems. Thus, the PhD theses will be related to any of the following topics: Rich (real-life) Vehicle Routing Problems, real-life Scheduling Problems, Green Logistics, Intelligent Transport Systems, Horizontal Collaboration in Logistics, etc. These topics represent important challenges for industrial sectors in any developed country, which explains their relevance in the context of current international research.
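As a small, illustrative example of the heuristic side of these topics (not a method specific to ICSO), a constructive nearest-neighbour heuristic gives an initial vehicle route that metaheuristics or simheuristics can then improve:

```python
import math

def nearest_neighbour_route(depot, customers):
    """Constructive heuristic for a single-vehicle routing problem:
    repeatedly visit the closest unvisited customer. A typical starting
    solution that metaheuristics then refine."""
    route, current, pending = [depot], depot, list(customers)
    while pending:
        nxt = min(pending, key=lambda c: math.dist(current, c))
        pending.remove(nxt)
        route.append(nxt)
        current = nxt
    route.append(depot)  # close the tour back at the depot
    return route

print(nearest_neighbour_route((0, 0), [(2, 0), (1, 0), (5, 5)]))
# [(0, 0), (1, 0), (2, 0), (5, 5), (0, 0)]
```

Rich, real-life variants add capacities, time windows and stochastic demands, which is where hybrid simulation-optimization methods come into play.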

Webs: http://dpcs.uoc.edu (HAROSA KC), http://in3.uoc.edu (ICSO Programme), http://ajuanp.wordpress.com

Outsourcing of ICT services for companies and public administrations

Responsible: Josep Maria Marco-Simó
After undertaking the in-depth case study of the Framework Agreements for the Standardisation of ICT Services for the Government of Catalonia, which constituted the first PhD thesis of the group, ICSS intends to continue working in the wider research area of outsourcing of ICT services by both private companies and public administrations, including universities.

Procurement and implementation services for integrated systems

Responsible: Joan Antoni Pastor
We continue working in the analysis and development of procurement and implementation methodologies for integrated information systems such as those based upon customer relationship management (CRM) systems, supply chain management (SCM) systems, enterprise application integration (EAI) projects and information systems integration in general, as well as enterprise resource planning (ERP) systems. We also include within our interests along these lines the analysis and development of models and methods for decisional information systems such as those based on business intelligence technology.

Novel approaches for ICT services management and governance

Responsible: Joan Antoni Pastor
Our work and interest focuses on the analysis and development of models and methods for managing ICT services, informed by the principles of IT governance and agile operations. To this effect, one chief aim of the group is to come to grips with, and offer solutions to, the main professional problems found in the domain of strategic management and planning of information systems and ICT services.

Innovative curriculum engineering for ICT programmes at any level

Responsible: Maria Jesus Marco-Galindo
Although probably a better fit for other UOC PhD programmes such as "Education and ICT", we are interested in researching novel ICT curriculum experiences and approaches at any level, including university and vocational studies, which we aim to do in collaboration with other disciplines such as pedagogy. We already have an advanced PhD thesis analysing the case of UOC's innovation in written communication skills for ICT students, and the design and development of a new transversal system-service in this area.

Novel formal approaches and methods for researching ICT and IS

Responsible: Joan Antoni Pastor
Usually orthogonal to and integrated with other research lines, we are explicitly interested in experimentation and publication with suitable research methods for investigating problems in our field (case studies, action research, grounded theory, design science research, PLS).

Digital media security, privacy and forensics (steganography, watermarking, fingerprinting and steganalysis)

Responsible: David Megías
The security and privacy of digital media contents has been attracting the attention of academia and industry for the past two decades. Since digital content can be copied without any loss and at no cost, content vendors and producers are trying to design mechanisms either to avoid or to detect unauthorised copies. Steganography, watermarking and fingerprinting for image, audio and video contents are being investigated by different groups worldwide in order to produce practical solutions to these kinds of problems while satisfying required properties such as security, privacy, capacity, robustness and transparency.

On the other hand, steganography is also used to send concealed messages in an apparently innocent cover object. Steganalysis techniques are being developed in order to detect whether a multimedia object contains secret information which may be used for malicious purposes.
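A minimal, illustrative sketch of the idea behind steganographic embedding, using naive least-significant-bit (LSB) substitution; practical schemes are far more sophisticated, and it is precisely the statistics of such bit planes that steganalysis examines:

```python
# Naive LSB steganography: hide one message bit in the lowest bit of each
# pixel value. For illustration only; real schemes resist steganalysis.
def embed(pixels, message_bits):
    return [(p & ~1) | b for p, b in zip(pixels, message_bits)] + pixels[len(message_bits):]

def extract(pixels, n_bits):
    return [p & 1 for p in pixels[:n_bits]]

cover = [142, 137, 201, 90, 64, 77]
stego = embed(cover, [1, 0, 1, 1])
print(extract(stego, 4))  # [1, 0, 1, 1]; each pixel changed by at most 1
```

The transparency/capacity/robustness trade-off mentioned above is visible even here: each embedded bit perturbs a pixel by at most one grey level, but the hidden payload is fragile and statistically detectable.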

In general, these topics belong to computer forensic techniques that can be used to show legal evidence of illegal or criminal actions. This research line is related to all these issues, with a special focus on networked distribution systems such as online social networks or peer-to-peer applications.

Research group information:

http://in3.uoc.edu/opencms_portalin3/opencms/en/recerca/list/kison_k-ryptography_and_information_security_for_open_networks

Privacy on social networks

Responsible: Jordi Casas
In recent years, an explosive amount of network data has been made publicly available. Embedded within this data there is private information about the users who appear in it. Therefore, data owners must respect the privacy of users before releasing datasets to third parties. In this scenario, anonymization processes become an important concern. Many studies reveal that although some user groups are less concerned about data owners sharing data about them, up to 90% of members in other groups disagree with this practice.

The simple technique of anonymizing a network by removing the identities of the vertices (users) before publishing it does not guarantee privacy. There exist adversaries that can infer the identity of the users by solving a set of restricted graph isomorphism problems. Therefore, some approaches and methods have been imported from the anonymization of relational data, but the peculiarities of network data prevent these methods from working directly on graph-formatted data.
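A simple way to see why naive anonymization fails is degree-based re-identification: an adversary who knows how many contacts a target has can single it out unless every degree value is shared by at least k vertices (k-degree anonymity). An illustrative check:

```python
from collections import Counter

def k_degree_anonymous(degrees, k):
    """A graph is k-degree-anonymous if every degree value is shared by at
    least k vertices; otherwise an adversary who knows a target's number of
    contacts can narrow the target down to fewer than k candidates."""
    return all(count >= k for count in Counter(degrees).values())

# Removing names keeps the structure: the single vertex of degree 5 is
# re-identifiable by anyone who knows the target has 5 contacts.
print(k_degree_anonymous([2, 2, 3, 3, 3, 5], 2))  # False
print(k_degree_anonymous([2, 2, 3, 3], 2))        # True
```

Anonymization algorithms therefore perturb the graph (edge additions/deletions) until such structural properties hold, which is exactly where the privacy/utility trade-off arises.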

The aim of this research is the design of privacy-preserving methods and algorithms that guarantee the users' privacy while keeping data utility as close as possible to that of the original data. A good anonymization method has to achieve a trade-off between data privacy and data utility. Consequently, several graph-mining tasks must be considered in order to quantify the information loss produced on the anonymized data.

Research group information:

http://in3.uoc.edu/opencms_portalin3/opencms/en/recerca/list/kison_k-ryptography_and_information_security_for_open_networks

Security in cognitive radio networks

Responsible: Carles Garrigues, Helena Rifà
Spectrum is an essential resource for the provision of mobile services. In order to control and delimit its use, governmental agencies set up regulatory policies. Unfortunately, such policies have led to a scarcity of spectrum, as few frequency bands are left unlicensed, and these are used for the majority of new emerging wireless applications.

Cognitive radio networks try to alleviate the spectrum scarcity problem by designing a system in which the licensed spectrum can be used opportunistically. Cognitive radio terminals form self-organizing cooperative networks with the ability to sense their electromagnetic environment, find the spectrum holes, and adjust their operating parameters to access these free bands.

Within the realm of cognitive radio networks, security is a crucial requirement to avoid harmful interference to licensed users. Although this is an active research area that has progressed successfully in recent years, there are still challenges to be addressed, such as efficient authentication and encryption of data, robust cooperation between users, the trading of spectrum, or the detection and identification of malicious nodes.

Therefore, we are looking for researchers interested in security aspects applied to cooperative wireless networks.

Research group information:

http://in3.uoc.edu/opencms_portalin3/opencms/en/recerca/list/kison_k-ryptography_and_information_security_for_open_networks

Security and privacy in Smart Cities

Responsible: Carles Garrigues, Helena Rifà
The research around Smart Cities aims to improve the welfare and quality of life of citizens while stimulating the economic progress of cities. To make this possible, the first challenge is to have an integrated communications network that provides high capacity and capillarity, thus allowing the interconnection of all city services that generate information (street lighting, water services, waste management, transport, etc.).

Over this communication network, the second major challenge is to build an urban platform consisting of three layers: first, a layer of sensors that collects all the information of the state of the city; second, a layer of data management and generation of useful information through inference engines and systems of massive data analysis; third, an application layer that presents information to the user in a convenient format. These three layers are connected through interfaces that allow isolating the specifications of each particular technology and enable the integration of new modules to the system in a transparent manner.

The management of data and information in a Smart City involves the integration of several information systems with very different functions: storage of data through non-relational databases, processing and analysis of data using distributed computing, urban mining, social computing, natural language processing and semantic reasoning, support decision making and modelling of urban behaviour.

The enormous number of information and communication systems involved in a Smart City naturally generates important security challenges. Each of the systems involved has its inherent vulnerabilities, and their integration gives way to new potential security attacks. These new challenges are mainly related to the fields of privacy, confidentiality and availability of services. The research will focus on finding solutions to these problems on the basis of the state of the art in cryptography, privacy, anonymity, authenticity, etc.

Research group information:

http://in3.uoc.edu/opencms_portalin3/opencms/en/recerca/list/kison_k-ryptography_and_information_security_for_open_networks

Security and privacy in reality mining

Responsible: Sara Haijan, David Megías
Reality mining refers to a set of techniques for gathering data via sensors (which can be integrated into mobile devices) about human behavior and its environment. These data can be used to predict certain patterns of behavior and their effects. For example, samples of the mobility patterns of a group of people can be related to certain effects that these patterns have on their health.

Despite the advantages of processing this information, for example to advise individuals to take preventive measures against certain diseases, it is clear that the collection and storage of such data raise important ethical issues, such as security and privacy. It is essential that the storage and processing of information is done in a way that ensures the privacy of the individuals who agree to participate in this type of study or who want to enjoy the benefits of this technology.

The project involves designing systems that allow data collection with the required degree of privacy through the use of specific cryptographic protocols, combined with data mining techniques and the management of large amounts of data (big data).

Research group information:

http://in3.uoc.edu/opencms_portalin3/opencms/en/recerca/list/kison_k-ryptography_and_information_security_for_open_networks

Computer security forensics

Responsible: Jordi Serra
Nowadays, the techniques for retrieving information from computer systems are known as computer forensics. There are many techniques used to find a particular type of information, looking for exactly what one wants to find on drives, RAM, phones, etc. In many cases this information is presented at trial, so the chain of custody becomes an important part of the process when the evidence is handed over to others, because these acquisitions are performed by crime-scene investigators (police) and given to computer forensics experts for the plaintiffs.

Moreover, the search for information on disks or RAM can often undermine the right to privacy, so one cannot open emails or files without the consent of a judge, who sometimes does not allow an email to be opened. The process of transmitting information between judicial parties (the members of the trial) must be secured, so that they cannot alter any information, whether intentionally or not.

This line of research focuses on obtaining a technology that can automatically access the information stored on drives, looking for word patterns in unopened and unaltered information files, and on designing a scheme, based on PKI or similar, to ensure appropriate custody between the different parties participating in the system (police, experts, lawyers, etc.) and to create a new process for transmitting information remotely.
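As a hypothetical sketch (not the group's actual design), the chain-of-custody requirement can be illustrated with a hash-chained log, where each evidence transfer is hashed together with the previous record, so that any later alteration breaks every subsequent hash; a full solution would add per-party PKI signatures:

```python
import hashlib

def custody_record(prev_hash, actor, action):
    """Hypothetical tamper-evident chain-of-custody entry: each transfer of
    evidence is hashed together with the previous record, so altering any
    earlier entry invalidates all the hashes that follow it."""
    entry = f"{prev_hash}|{actor}|{action}"
    return hashlib.sha256(entry.encode()).hexdigest()

# Each hand-over of the evidence appends a record chained to the last one.
h0 = custody_record("genesis", "police", "disk image acquired")
h1 = custody_record(h0, "forensic expert", "keyword search performed")
```

Verifying custody then amounts to recomputing the chain from the first record and comparing the final hash.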

Research group information: http://in3.uoc.edu/opencms_portalin3/opencms/en/recerca/list/kison_k-ryptography_and_information_security_for_open_networks

Jordi Serra-Ruiz: http://www.uoc.edu/webs/jserrai/EN/curriculum/index.html

Knowledge representation and reasoning

Responsible: M. Antonia Huertas
This line is related to artificial intelligence and logic. One of the goals of artificial intelligence is to develop techniques and methods that allow a computer system to solve problems intelligently. Solving such problems may require certain skills, such as the ability to learn and to reason, as a human expert would. The latter capability, reasoning, is provided by the use of logical systems. Knowledge representation is more concerned with modelling than with the collection of individual facts: with establishing a framework of understanding within which the facts make sense. The key to establishing such a framework is to endow the computer with a capacity for reasoning. Hence the full title of this line: knowledge representation and reasoning.

This research line investigates modelling and representation of different forms of knowledge and reasoning, focusing on their logical foundations. Theoretical properties of knowledge representation and reasoning formalisms are investigated. In particular, knowledge-based systems, description logics, temporal and modal logics, formal ontologies or fuzzy systems, are considered.
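The kind of mechanical reasoning described above can be illustrated with a minimal forward-chaining sketch over Horn-style rules (the facts and rules here are invented for illustration):

```python
# Minimal forward chaining: repeatedly fire rules (premises -> conclusion)
# whose premises are all known, until no new facts can be derived.
def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if set(premises) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"bird", "not_penguin"}, "flies"),
         ({"canary"}, "bird")]
print(sorted(forward_chain({"canary", "not_penguin"}, rules)))
# ['bird', 'canary', 'flies', 'not_penguin']
```

Description logics, modal logics and fuzzy systems extend this basic picture with richer languages and inference services, while keeping the same idea of deriving implicit knowledge from a represented model.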

Knowledge engineering for computational intelligence

Responsable: M. Elena Rodríguez, M. Antonia Huertas
Knowledge engineering focuses on the development of models for improving the capabilities of tools and applications in interactive environments. On the other hand, computational intelligence is a branch of artificial intelligence based on the study of adaptive mechanisms to enable smart behaviour of complex and dynamic systems. Intelligence is directly linked to reasoning and decision-making. Research in computational intelligence does not reject statistical methods, but often provides a complementary view based on logic.

This research line focuses on the use of techniques to formalize knowledge representation and reasoning in interactive systems. It covers both theoretical and practical approaches. The development of software artefacts is an integral part of this research line, with applications in interactive environments that require computational intelligence. The main application domain is technology-enhanced learning, but not exclusively.

Knowledge discovery and analytics

Responsable: M. Elena Rodríguez
Analytics consists of the measurement, collection, analysis and reporting of meaningful patterns in data, with the purpose of discovering valuable knowledge in the environment in which the data occur. This research area faces many challenges in terms of both scientific research and practical application, as it involves frameworks, techniques and tools related to data collection, data representation, analysis and visualization for improving processes in the environment where they are applied.

Authorship, Authentication and Certification in e-assessment

Responsable: Anna Guerrero, David Bañeres 
Even though online education is a very important pillar of lifelong education, institutions are still reluctant to commit to a fully online educational model. In the end, they keep relying on on-site assessment systems, mainly because fully virtual alternatives do not yet have the social recognition or credibility they deserve. Thus, the design of virtual assessment systems that can provide effective proof of student authenticity, authentication and authorship, and of the integrity of the activities, in a scalable and cost-efficient manner would be very helpful.

This research line proposes to analyze a virtual assessment approach based on a continuous trust level evaluation between students and the institution.

Data mining techniques to discover knowledge in databases

Responsable: Germán Cobo 
The main objective of this line of research lies in the development of data mining tools to discover knowledge from large amounts of data, represented in multidimensional spaces and coming from different kinds of sources; e.g. design and implementation of parameter-free hierarchical clustering algorithms, which do not require any prior knowledge about the data, including the number of clusters to identify. The scope of application of these tools is wide and varied; regarding virtual learning environments, by way of example, it can include modelling of online learners' activity throughout their learning processes, predicting academic performance, dynamic scheduling for time-limited use of laboratory resources, adapting learners' educational itineraries, etc.
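By way of illustration, a naive single-linkage agglomerative clustering can be sketched in a few lines (the data are made up); note that the distance threshold below is precisely the kind of parameter that the parameter-free algorithms pursued in this line aim to eliminate:

```python
from itertools import combinations

def single_linkage(points, threshold):
    """Naive agglomerative clustering on the real line: repeatedly merge
    the two closest clusters until the closest pair is farther apart
    than `threshold`."""
    clusters = [[p] for p in points]
    while len(clusters) > 1:
        # Find the closest pair of clusters under single-linkage distance
        (i, j), dist = min(
            (((a, b), min(abs(x - y) for x in clusters[a] for y in clusters[b]))
             for a, b in combinations(range(len(clusters)), 2)),
            key=lambda t: t[1])
        if dist > threshold:
            break
        clusters[i].extend(clusters.pop(j))  # j > i, so indices stay valid
    return clusters

# Two obvious groups of 1-D points
print(single_linkage([1.0, 1.2, 1.1, 9.0, 9.3], threshold=2.0))
```

This brute-force version is O(n^3); the research line targets scalable variants that also infer the number of clusters from the data rather than from a threshold.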

ICT education through formative assessment, Learning Analytics and Gamification

Responsables: Robert Clarisó and Santi Caballé
The ICT degrees include very practical competencies, which can only be acquired through experience: performing exercises, designs, projects, etc. In addition to the challenge of motivating students to solve activities, lecturers face the problem of assessing and providing suitable feedback on each submission. Receiving immediate and continuous feedback can facilitate the acquisition of these competencies, although this requires support in the form of automatic tools. Automating the assessment process may be simple for some activities (e.g., practical programming exercises) but complex for activities involving design or modeling. Monitoring the use of these tools can reveal very valuable information for the tracking, management and continuous improvement of the course by the teaching team. However, in order to leverage its full potential, this information should be complemented with data from other sources (e.g., the student's academic record) and historical information from previous editions.

The main goal of this research line is to design and build a set of e-Learning tools and services to provide support to the learning process in university degrees in the field of ICT (Information and Communication Technologies). The expected benefits will have a repercussion on the students (improvement of the educational experience, greater participation and performance, lower drop-out rate) and on the lecturers, managers and academic coordinators (resources for monitoring a course, making decisions and predictions).

Taking into account these elements, the contributions will focus on three axes:

- Tools for formative assessment, which can provide immediate feedback by means of automatic assessment. In particular, the research activity will focus on knowledge areas with high cognitive or modeling demands, such as the design or modeling of software and hardware.

- Learning analytics that monitor the activity and progress of students in their use of these tools and make it possible to analyze learning results, identify critical points and define improvement actions. These analytics will also incorporate other sources of academic and historical information to facilitate course tracking and decision-making by the teaching team.

- Gamification, as an incentive scheme to motivate students to perform new activities and increase their engagement without sacrificing academic rigor.

A relevant aspect of the e-learning tools developed in this research line is their modularity and independence from particular technologies or virtual campuses, with the aim of facilitating their application to different courses and contexts. To this end, the functionalities of these tools will be offered as a set of services, using appropriate standards. The tools will be evaluated in courses on mathematics, computer engineering and telecommunications, and their use is expected to become feasible both in self-taught education (lifelong learning) and traditional formal education, as well as in massive open online courses (MOOCs).

The research conducted here will be supported by the Spanish research project ICT-FLAG: Enhancing ICT education through Formative assessment, Learning Analytics and Gamification (Ref: TIN2013-45303-P).

Emotion-awareness tools for affective eLearning

Responsables: Santi Caballé and Thanasis Daradoumis 
The next generation of distance/lifelong learning technologies is expected to adapt not only to learners' cognitive performance but also to their affective state. However, addressing users' feelings in Computer-Supported Collaborative Learning (CSCL) and Virtual Learning Environments (VLEs) is still in its infancy. Integrating emotion awareness (detecting and expressing emotions) can improve the state of the art of educational technologies. Unfortunately, emotion recognition tools and technologies can be invasive and obtrusive, often interrupting learners as they accomplish their main activity or task.

The main objective of this research line is to integrate emotion awareness into learning systems (e.g. Moodle, Blackboard) in realistic educational sessions. This objective includes:

- The development of new ways to collect emotional data from different input channels (facial, voice, text input, events, etc.) by taking advantage of both traditional devices (e.g. mouse, keyboard, web camera) and new devices (e.g. smartphones, tablets), in order to bring emotion recognition into people's daily life.

- Providing high-level knowledge from low-level data (e.g. web logs), through meaningful emotion visualizations for all stakeholders (self, peers, tutors, experts), to advance the learning analytics field.

- Validation of effective feedback strategies with an adequate positive impact on users' cognitive performance and emotional regulation.

- The mining of new pedagogical strategies to address the presence of emotions in learning, which will advance the field of affective learning.

The empirical studies are to be conducted in formal and informal learning settings as well as workplace learning, covering a wide age range and validating also gender differences.

Computer-Mediated Collaboration and Learning within an Adaptive, Interactive, Personalized, Emotion and Context-aware Environment

Responsables: Thanasis Daradoumis and Santi Caballé
The increasing use of social networking sites (SNS) introduces new problems, including SNS addiction and cyber-bullying, that interfere with school and learning. Such problems stem from the failure to address the development of socio-cognitive and socio-emotional skills in formal school curricula. Moreover, while personalization has mostly been explored through learner profiles, context awareness can enhance such systems considerably by capturing not only learners' preferences but also the learner's context, the group context, and the context of learning spaces and objects. The aim is to provide learners with advanced and enriched information on the context where learning and interaction take place, giving them situational/contextual awareness.

In fact, a new pedagogy is needed based on self-regulated, experiential learning in groups where learners are supported to achieve a deeper understanding of self in relation to others. In order to contribute to the goal of building cognitive- and emotion-centered learning programs, research should focus on the investigation of how group awareness tools can be adapted to support the social regulation of cognition and emotions in learning contexts.

Self-awareness, control of impulsivity, working cooperatively, and caring about oneself and others are key factors that can lead to effective self-regulated learning and motivation regulation in distance learning environments. One needs to identify, encourage and reinforce the social, cognitive and emotional skills needed for a successful engagement in computer-mediated collaboration (CMC) and learning within an adaptive, interactive, personalized, emotion and context-aware Environment (http://www.ascd.org/publications/books/197157/chapters/The-Need-for-Social-and-Emotional-Learning.aspx).

The use of group awareness technologies is becoming necessary to circumvent the bottlenecks of CMC. Such technologies aim at analyzing users' characteristics and behavior and feeding that information back to the group. In CSCL contexts, group awareness tools should be designed not only to improve and expand social and cognitive processes during collaborative learning (Buder, 2011), by making explicit and visible what is not directly observable, such as the group members' prior knowledge (Sangin et al., 2011) or their participation level during online discussions (Janssen et al., 2011), but also to provide collaborators with information about their partners' affective states during online collaboration. Ultimately, we need to investigate the degree of positive impact of socio-cognitive and socio-emotional awareness tools on collaborative processes and outcomes, as well as how to provide effective and timely cognitive and emotional feedback that can help in monitoring and assessing learners' behavior, performance and individual progress.

Information models for enhancing security in eLearning

Responsable: Santi Caballé 
This research line aims at incorporating information security properties and services into on-line e-Learning. The main goal is to design innovative security solutions, based on methodical approaches, to provide e-Learning designers and managers with guidelines for incorporating security into on-line learning. These guidelines include all processes involved in e-Learning design and management such as security analysis, learning activities design, detection of anomalous actions, trustworthiness data processing, and so on.

This research is to be conducted from a multidisciplinary perspective; the most significant areas are e-Learning and on-line collaborative learning, information security, learning management systems, and trustworthiness assessment and prediction models. In this scope, the problem of securing collaborative on-line learning activities will be tackled with a hybrid model based on functional and technological solutions, such as trustworthiness modeling and information security technologies.

Ontologies in support for affective and emotional collaborative learning systems

Responsable: Jordi Conesa
Human-computer interaction (HCI) applied to intelligent tutoring systems (ITS) can be used to develop and design pedagogically guided methodologies that handle the emotional/affective dimension of learning and give the e-learning system the ability to offer more intelligent, adaptive and collaborative services.

To this end, this research line focuses on ontological frameworks that include emotional information about the sentiment and opinion of students when collaborating. The use of automatic opinion mining and sentiment analysis techniques is fostered to study the opinion that a learning document expresses and to determine the sentiments felt by a student when writing an opinion in a forum post, in terms of subjectivity, polarity, strength and so on.
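A toy illustration of the polarity side of sentiment analysis (the lexicon and its weights are invented for the example; real systems learn them from data):

```python
# Tiny hand-made polarity lexicon; a real opinion-mining system would
# learn these weights from annotated student posts.
LEXICON = {"good": 1, "great": 2, "helpful": 1,
           "bad": -1, "confusing": -2, "boring": -1}

def polarity(text: str) -> float:
    """Average polarity of the known words in a post; 0.0 if no word
    from the lexicon appears."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    scores = [LEXICON[w] for w in words if w in LEXICON]
    return sum(scores) / len(scores) if scores else 0.0

print(polarity("Great and helpful material!"))   # clearly positive
print(polarity("The forum thread was confusing"))
```

Subjectivity and strength would require richer features than this bag-of-words average, which is only meant to make the polarity idea concrete.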

Technology-Enhanced Assessment, Analytics and Feedback

Responsable: Maria Antonia Huertas, Enric Mor 
Technology can support nearly every aspect of assessment in one way or another, from the administration of individual tests and assignments to the management of assessment across a faculty or institution; from automatically marked on-screen tests to tools to support human marking and feedback. This research line is related to technology-enhanced assessment, and focuses on the wide range of technologies and ways in which technology can be used to support assessment, feedback and its analytics. Research topics include, but are not limited to: design, development and evaluation of e-assessment systems; technologies and specifications for e-assessment; technology-enhanced assessment design, validity and reliability; feedback generation, support and automation; human-computer interaction in e-assessment and feedback; learning and assessment analytics; and the collection, analysis and visualization of data for e-assessment and feedback.

Models, Tools and architectures for Computational Thinking

Responsables: Adriana Ornellas, Enric Mor
Computational thinking is an emerging area of study that originates from the discipline of computer science. Researchers define it as the ability to apply the strengths of computing in order to formulate problems so that their solutions can be represented as computational steps and algorithms. This research line focuses on exploring appropriate models, tools and architectures to support the development of computational thinking in formal and non-formal educational settings.

Video-games and gamification in higher education learning environments

Responsable: Joan Arnedo, Dani Riera
Since the dawn of time, games have been used as an effective learning method, not just by humans but by many living beings. Even though gaming as a learning tool tends to be associated with early developmental stages (childhood), and thus labelled a frivolous activity, this perspective has slowly shifted.

In the computer era, video-games have dethroned all other types of media, becoming an activity shared by groups of people with very different interests and ages. This research line focuses on studying how video-games and/or game-like activities (gamification) can be embedded into the learning process in university studies to improve students' experience and performance.

Design and development of environments that integrate theoretical and practical learning activities in a natural manner

Responsable: David García, Carlos Monzo 
This research line aims to design and develop virtual learning environments that integrate learning activities that are both theoretical (e.g. readings, videos, audios, etc.) and practical (e.g. access to remote labs, mathematical exercises, simulations, etc.) in a natural and user-friendly way. To this end, these environments must take the idiosyncrasy of both e-learners and devices (i.e. PCs, smartphones, tablets, etc.) into consideration.

Verification of learning activities authorship

Responsable: Jose Antonio Morán, Eugènia Santamaría 
Virtual environments have many advantages, but the student is not physically present in the classroom. This fact complicates verifying the student's identity and the authorship of the work. There are non-invasive biometric user identification techniques that can be applied to identification in virtual learning environments. In particular, we propose research into the use of voice analysis and its integration with other non-invasive behavioural techniques.

Intelligent tutoring systems for learning digital systems

Responsable: David Bañeres, Robert Clarisó 
The synthesis of digital circuits is a basic skill in all bachelor's degrees in the ICT area, such as Computer Science, Telecommunication Engineering or Electrical Engineering. An important hindrance in a virtual learning environment is that students do not have the face-to-face support of the instructor during their learning process.

This research deals with the design of a unified automated framework providing a set of self-assessment services for learning digital systems. In addition to designing tools where personalized feedback is crucial, the research also focuses on the instructor's point of view, giving specific information related to the analysis of students' learning progress.
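One basic self-assessment service of this kind can be sketched as an exhaustive truth-table comparison between a reference circuit and a student's submission (the circuits below are illustrative, not taken from any actual course tool):

```python
from itertools import product

def equivalent(f, g, n_inputs):
    """Exhaustively compare two combinational circuits, given as Boolean
    functions, over every possible input assignment."""
    return all(bool(f(*bits)) == bool(g(*bits))
               for bits in product([0, 1], repeat=n_inputs))

# Reference circuit vs. a student's De Morgan rewrite: both implement NAND
reference = lambda a, b: not (a and b)
student   = lambda a, b: (not a) or (not b)
print(equivalent(reference, student, 2))
```

Exhaustive comparison is only feasible for small input counts; real verification tools use SAT/BDD techniques, but the automated-feedback idea is the same.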

Automated tools for software engineering

Responsable: Robert Clarisó and Daniel Riera 
Nowadays, software is a core asset in companies and administrations of any size. Software bugs can cause accidents and important economic losses. This is why the discipline of software engineering attempts to improve automation in the software development process. This research line is oriented towards investigating automated tools that help improve the quality of the final software product through the use of techniques from modelling, algorithmics and artificial intelligence.

Reuse-based software engineering methodologies for developing complex collaborative learning systems

Responsable: Santi Caballé
eLearning and, in particular, computer-supported collaborative learning (CSCL) needs are evolving in line with increasingly demanding pedagogical and technological requirements. As a result, current collaborative learning practices must be continuously adapted, adjusted and personalized to each specific target learning group and pedagogical model. These very demanding needs of the CSCL domain have become very attractive for domain software developers and represent a great challenge for the software development research community.

This research line proposes to conduct research on advanced reuse-based software engineering methodologies and architecture solutions for developing complex CSCL applications. One key aspect will be to reuse the large number of common requirements shared by CSCL applications. The aim is to yield effective and timely CSCL software systems capable of supporting and enhancing current online collaborative learning practices.

Model-driven development

Responsable: Elena Planas, Robert Clarisó and Jordi Cabot
Model-Driven Development is a software development approach that attempts to reduce the development costs by focusing on producing software models (usually specified by UML) rather than code, and relying on tools to automatically generate the final implementation from these models.

This research line will investigate techniques and tools to support model-driven software development processes (model transformations, executable models, domain-specific languages, etc.). The focus of this work will be on improving developer productivity and the quality of the final software product.
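A toy model-to-code transformation illustrates the idea (the model format below is invented for the example; real MDD toolchains work from UML or similar models):

```python
# Hypothetical minimal "model": a class name plus typed attributes,
# standing in for a UML class diagram.
model = {"name": "Student", "attributes": [("name", "str"), ("credits", "int")]}

def generate(model) -> str:
    """Emit a Python class from the model, the flavor of transformation
    an MDD toolchain automates."""
    lines = [f"class {model['name']}:"]
    args = ", ".join(f"{a}: {t}" for a, t in model["attributes"])
    lines.append(f"    def __init__(self, {args}):")
    for attr, _ in model["attributes"]:
        lines.append(f"        self.{attr} = {attr}")
    return "\n".join(lines)

code = generate(model)
namespace = {}
exec(code, namespace)  # the generated class is directly usable
s = namespace["Student"]("Ada", 30)
print(s.name, s.credits)
```

The productivity argument is visible even at this scale: the model is edited once, and the implementation is regenerated rather than maintained by hand.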

Graphical formalisms and their application to computing education

Responsable: Robert Clarisó, David Bañeres 
There are many types of graphical formalisms that can be used to describe the dynamic behaviour of a system: graphs, automata, state machines, nets, activity/sequence diagrams, etc. In computing degrees, these formalisms are introduced in courses within areas such as digital circuit design, software engineering, graph theory or theoretical computer science.

This research deals with the construction of a tool infrastructure that can support features such as layout and representation of graphical formalisms, diagram animation and simulation, generation of a software/hardware implementation from the model, automated testing or evaluation of correctness. The goal is the application of these techniques to courses in the Computing curriculum in order to improve the understanding of computing concepts, facilitate the creation, visualization and exchange of graphical formalisms, and contribute to the assessment and self-assessment of students.
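As a small illustration of simulating one such formalism, a deterministic finite automaton can be run step by step (the automaton below is a textbook example, not a tool from this line):

```python
def run_dfa(transitions, start, accepting, word):
    """Simulate a deterministic finite automaton: follow one transition
    per input symbol and check whether the final state is accepting."""
    state = start
    for symbol in word:
        state = transitions[(state, symbol)]
    return state in accepting

# DFA over {0,1} accepting words with an even number of 1s
t = {("even", "0"): "even", ("even", "1"): "odd",
     ("odd", "0"): "odd", ("odd", "1"): "even"}
print(run_dfa(t, "even", {"even"}, "1011"))  # three 1s, so rejected
print(run_dfa(t, "even", {"even"}, "1100"))  # two 1s, so accepted
```

A step-by-step trace of `state` is exactly what a diagram-animation tool would render for students.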

Software analytics

Responsable: Jordi Cabot
Software Analytics is the study of all data related to software and its engineering processes, in order to better understand how software is built and to predict and improve important quality factors of software artifacts. Software analytics includes the analysis of the program code, but we are also interested in the analysis of all the collaboration and social aspects around it (who is the community that builds the software, how they are organized, what best practices they follow, and so on).

Software to empower non-technical people

Responsable: Jordi Cabot
The demand for software developers is not met by academic institutions. Estimates say that the number of people who program at work largely outnumbers the number of professional programmers. As a result, a lot of error-riddled software is being created by these so-called end-user programmers; software that is often in charge of important tasks and data.

Our goal will be to study how development methods and tools can be tailored to non-technical people, empowering them to build the software they need to be more productive in their daily activities. This will be based on the use of domain-specific languages, which aim at adapting and simplifying modeling/programming languages for the specific requirements of a given community (based on the expertise, domain vocabulary and rules, role and goals of that community). E.g., imagine helping healthcare professionals build for themselves the tools they need for a better analysis of the health information they manage.

Device to Device (D2D) communications in cellular networks

Responsable: Ferran Adelantado

Cellular communications networks are evolving towards new architectures and services. The new cellular network standards, LTE-A (particularly Release 12 and the forthcoming releases), the so-called 4G and 5G, will be characterized by high densities of nodes, many of them low-power nodes. The consequences of such network densification are manifold; promising aspects such as decreased energy consumption and boosted achievable transmission rates nevertheless pose new challenges in terms of interference coordination (enhanced Inter-Cell Interference Coordination, eICIC), mobility, backhaul deployments, dual connectivity, etc.

In this context, the research community and the 3GPP (the corresponding standardization body) have preliminarily included in Release 12 of the standard the concept of, and the expected scenarios for, Device-to-Device (D2D) communications. The emergence of D2D communications in the LTE-A framework will gear cellular networks up to a new paradigm, where direct communication between user equipments will result in i) improved spectral efficiency, ii) higher throughputs, and iii) power savings. Despite being initially conceived as a solution for proximity services (ProSe), e.g. for public safety purposes, D2D has also proved useful to support other techniques in dense networks, such as load balancing solutions. Yet, there is still a wide range of open issues to be addressed:

  1. The allocation of radio resources and the degree of cooperation with the base station (either an eNB or a small cell).
  2. Power control for the D2D communications.
  3. Opportunities posed by mmW spectrum.
  4. Mobility management.
  5. Cooperation between users (the so-called user equipments cloud) and incentives to do it (in terms of delay, throughput, energy consumption, etc).
  6. Impact of the enhanced Inter-Cell Interference Coordination (eICIC) on the D2D Communications.
  7. etc.
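Many of these trade-offs reduce to link-level rate estimates. As a hedged illustration (the numbers below are invented, not taken from any standard or measurement), the achievable rate of a D2D link follows from its SINR via the Shannon capacity:

```python
import math

def shannon_rate(p_tx, gain, interference, noise, bandwidth):
    """Shannon-capacity rate (bit/s) of a link, given transmit power (W),
    channel gain, interference and noise powers (W), and bandwidth (Hz)."""
    sinr = (p_tx * gain) / (interference + noise)
    return bandwidth * math.log2(1 + sinr)

# Short direct D2D link: decent channel gain, little interference
direct = shannon_rate(p_tx=0.1, gain=1e-6, interference=1e-10, noise=1e-9,
                      bandwidth=10e6)
print(f"direct link: {direct / 1e6:.1f} Mbit/s")
```

Resource allocation and power control (issues 1 and 2 above) amount to choosing `p_tx` and the shared spectrum so that the sum of such rates is maximized while the interference each link causes to others stays acceptable.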

The offered Ph.D. position is addressed to highly motivated persons with an adequate background in the field of wireless communications who are interested in working on the emerging D2D technology. A good command of English is essential, and a solid mathematical background will be highly valued.

Interesting References:

X. Lin, J.G. Andrews, A. Ghosh, R. Ratasuk, "An overview on 3GPP Device-to-Device Proximity Services". (available online at www.arxiv.org/ftp/arxiv/papers/1310/1310.0116.pdf).

D. Tsolkas, E. Liotou, N. Passas, L. Merakos, "Enabling D2D communications in LTE networks", IEEE Personal Indoor and Mobile Radio Communications (PIMRC), Sept. 2013.

J. Liu, Y. Kawamoto, H. Nishiyama, N. Kato, N. Kadowaki, "Device-to-Device Communications Achieve Efficient Load Balancing in LTE-Advanced Networks", IEEE Wireless Communications, April 2014.

F. Malandrino, C. Casetti, C. Chiasserini, Z. Limani, "Uplink and Downlink Resources Allocation in D2D-Enabled Heterogeneous Networks", IEEE Wireless Communications and Networking Conference (WCNC), April 2014.

Q. Ye, M. Al-Shalash, C. Caramanis, J. G. Andrews, "Resource Optimization in Device-to-Device Cellular Systems Using Time-Frequency Hopping", IEEE Transactions on Wireless Communications, Oct. 2014.

User-centered interaction design

Responsable: Enric Mor, Maria Elena Rodríguez
Human-Computer Interaction (HCI) is organized around three elements: design, technology and people. It mainly focuses on the design, development and evaluation of interactive tools and systems from a user-centered perspective. Interaction design researchers invent, design, build and test interactive systems with user experience goals in mind. User experience also considers all kinds of users, including those with special needs, and consequently takes accessible technologies, universal design and accessibility as one of its main goals. HCI research contributes to areas such as interaction design, interactive visualization, online learning, and navigation and browsing interfaces. Existing techniques are improved and new approaches are developed in order to investigate user needs, capture and analyze user interaction, prototype interactivity and evaluate system usability.

Human computer interaction and e-learning

Responsable: Enric Mor, Ana Elena Guerrero
In this research area special attention is paid to the relation of HCI to online learning: how content and design of educational resources, tools and environments impact learners. Learning experience can be advanced through greater mutually beneficial contact between learning, technology and design. In the particular case of HCI and e-learning, interactive and accessible e-learning environments, learning materials and educational tools are needed. Therefore, this research focuses on the technologies needed to make e-learning interactive and accessible, their impact on learners and the interrelationship between interaction design, accessibility and learning experience.

Tools and architectures for interaction modelling

Responsable: Enric Mor, Adriana Ornellas
Computer and information systems produce a great amount of data related to user interaction. Interaction modelling aims to measure, collect, analyze and report data about people and their contexts. The goal is to understand, optimize and better design systems and interactions. This area presents many challenges in terms of scientific research, as it includes techniques related to knowledge discovery and representation, data mining and machine learning, and how they relate to interaction design, navigation, browsing and visualization. User interactions are the raw material for the research and are studied from the perspective of user-centered design. Research in this area seeks to define and build a model that represents interaction with a system, such as a navigation map, and to determine whether this model can be compared with the developer's conceptual model used to build the system. Ultimately, this research tries to find out whether there is a gap between user behavior and the system's intended design, and whether that gap indicates usability or user experience problems. In this research area, students are expected to define an interaction model, develop a support tool for building the model and evaluate the results.
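A minimal sketch of building such a navigation map from an invented click stream, counting page-to-page transitions (page names and the log itself are illustrative):

```python
from collections import defaultdict

# Hypothetical click stream: pages visited by one user, in order
clickstream = ["home", "course", "forum", "course", "quiz", "course", "forum"]

# Navigation map: how often each page leads to each other page
nav_map = defaultdict(lambda: defaultdict(int))
for src, dst in zip(clickstream, clickstream[1:]):
    nav_map[src][dst] += 1

for src, dsts in nav_map.items():
    print(src, "->", dict(dsts))
```

Comparing this observed transition graph with the navigation paths the designer intended is precisely where a usability gap would show up.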

Remote usability research

Responsable: Enric Mor
This research area takes as its starting point the user-centered design techniques that directly involve users, adding distinctive advantages such as the opportunity to observe users as they behave naturally in their own environment and context. User experience practitioners and users are distant, located in different places, and this leads to new challenges. The main aim is to investigate how user-centered design can be carried out remotely, find out how to adapt existing techniques, and develop new techniques and tools. The goal of the research area is to work on the most important aspects of remote user research, remote interaction design, and remote usability and user experience evaluation. Remote usability research constitutes a step forward in improving the user experience of applications and services but, at the same time, it poses new challenges compared to traditional techniques, especially in the design and evaluation of mobile applications and services.

Location Based Systems (indoor and outdoor) and context aware recommender systems

Responsable: Antoni Perez-Navarro and Jordi Conesa 
Almost every action occurs in a place and at a moment in time. This fact has gained utmost importance in recent years with the spread of mobile devices capable of providing GPS location and services associated with where we are. However, which new applications can these systems provide? Which new possibilities will Galileo open? What role does the user play within these location-based applications? What happens in indoor environments? In this research area we seek answers to these questions.

The area focuses on the advancement of location-based systems (LBS) and context-aware recommender systems (CARS) in outdoor and indoor environments, working on mobile devices. Basically, it seeks to:

1) Increase the functionalities of LBS and CARS to make them work in indoor environments, and focus them on specific sectors such as health.

2) Include personalization in LBS so that, in addition to time and position, the user's profile is among the parameters that play a role in the functionality; semantic web technologies are currently used for this.

The methodology followed is "Design and Creation". The research conducted is mainly applied research.
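At the core of outdoor LBS sits the distance between two GPS fixes; a standard haversine sketch (the coordinates below are illustrative):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    r = 6371000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Would a location-based service trigger a "nearby" recommendation?
d = haversine_m(41.3874, 2.1686, 41.4036, 2.1744)  # two points in Barcelona
print(f"{d:.0f} m")
```

Indoor environments are harder precisely because no GPS fix is available, which is why this line looks at alternative indoor positioning techniques.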
