Finished PhD Theses

The following PhD theses have already been completed.

Title: New Simulation Techniques for Energy Aware Cloud Computing Systems
Author: Gabriel González Castañé
Year: 2015
Description
This thesis provides new contributions for modelling and simulating energy-aware cloud computing systems:

  • The underlying architectures of cloud computing systems.
  • Non-invasive strategies for simulating power consumption (a sketch is given after this list).
  • Analysing the impact of realistic workloads on the system.
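As an illustration of what a non-invasive power model can look like, the sketch below derives energy from a component's simulated utilization without touching the component model itself. The linear form and the wattage figures are our own illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of a non-invasive, utilization-based power model: energy is
# derived from the simulated component's observed state, without modifying
# the component model itself. All coefficients are hypothetical.
def cpu_power_watts(utilization, p_idle=70.0, p_max=250.0):
    """Linear model: idle floor plus a utilization-proportional dynamic part."""
    assert 0.0 <= utilization <= 1.0
    return p_idle + (p_max - p_idle) * utilization

def energy_joules(utilization_samples, dt=1.0):
    """Integrate power over per-interval utilization samples of dt seconds each."""
    return sum(cpu_power_watts(u) * dt for u in utilization_samples)

print(energy_joules([0.0, 0.5, 1.0]))  # 70 + 160 + 250 = 480.0 J
```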

Title: A multi-tier cached I/O architecture for massively parallel supercomputers
Author: Francisco Javier García Blas
Year: 2010
Description
Recent advances in storage technologies and high-performance interconnects have made it possible in recent years to build increasingly powerful storage systems that serve thousands of nodes. The majority of the storage systems of the clusters and supercomputers in the Top 500 list are managed by one of three scalable parallel file systems: GPFS, PVFS, and Lustre. Most large-scale scientific parallel applications are written in the Message Passing Interface (MPI), which has become the de facto standard for scalable distributed-memory machines. One part of the MPI standard is related to I/O and has among its main goals the portability and efficiency of file system accesses. All of the above-mentioned parallel file systems may also be accessed through the MPI-IO interface.

The I/O access patterns of scientific parallel applications often consist of accesses to a large number of small, non-contiguous pieces of data, whose performance is dominated by the latency of network transfers and disks. Parallel scientific applications lead to interleaved file access patterns with high interprocess spatial locality at the I/O nodes. Additionally, scientific applications exhibit repetitive behaviour when a loop, or a function with loops, issues I/O requests. When I/O access patterns are repetitive, caching and prefetching can effectively mask their access latency. These characteristics of the access patterns have motivated several researchers to propose parallel I/O optimizations at both the library and the file system level. However, these optimizations are not always integrated across the different layers of the system.

In this dissertation we propose a novel generic parallel I/O architecture for clusters and supercomputers. Our design is aimed at large-scale parallel architectures with thousands of compute nodes. Besides acting as middleware for existing parallel file systems, our architecture provides on-line virtualization of storage resources. Another objective of this thesis is to factor out the common parallel I/O functionality of clusters and supercomputers into generic modules, in order to facilitate the porting of scientific applications across these platforms.

Our solution is based on a multi-tier cache architecture, collective I/O, and asynchronous data staging strategies that hide the latency of data transfers between cache tiers. The thesis aims to reduce the file access latency perceived by data-intensive parallel scientific applications through multi-layer asynchronous data transfers. To accomplish this objective, our techniques leverage multi-core architectures by overlapping computation with communication and I/O in parallel threads.
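The following is a minimal sketch of the overlap idea only, not the thesis' implementation: a background thread drains a bounded staging queue (standing in for one cache tier) while the compute loop keeps running, so transfer latency is hidden behind computation. The file name and block sizes are illustrative.

```python
# Minimal sketch of asynchronous data staging: a background thread drains a
# bounded staging queue (one cache tier) to the next tier while the compute
# loop keeps running. Names and sizes are illustrative only.
import queue, threading

staging = queue.Queue(maxsize=8)   # upper cache tier
STOP = object()                    # sentinel to shut the stager down

def stager(out_file):
    # Hides transfer latency: flushes blocks while computation proceeds.
    while True:
        block = staging.get()
        if block is STOP:
            break
        out_file.write(block)

with open("checkpoint.dat", "wb") as f:
    t = threading.Thread(target=stager, args=(f,))
    t.start()
    for step in range(100):
        data = bytes([step % 256]) * 4096   # stands in for real computation
        staging.put(data)                   # enqueue instead of blocking on I/O
    staging.put(STOP)
    t.join()
```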

Prototypes of our solutions have been deployed on both clusters and Blue Gene supercomputers. The performance evaluation shows that combining collective strategies with the overlapping of computation, communication, and I/O may bring a substantial performance benefit for access patterns common to parallel scientific applications.

Title: A generic software architecture for portable applications in heterogeneous Wireless Sensor Networks
Author: María Soledad Escolar Díaz
Year: 2010
Description
Recently, in the scope of embedded systems, Wireless Sensor Networks (WSN) have emerged as a promising technology. A Wireless Sensor Network fuses the physical and computational worlds, offering the possibility of monitoring a wide variety of environmental phenomena through devices called sensor nodes or motes. In this sense, different operating systems for sensor nodes have been proposed in recent years to abstract away the heterogeneous hardware components integrated into the motes and to facilitate the writing of small programs. In spite of these operating systems, we are still very far from a generic and platform-independent development architecture for applications that are portable among different sensor nodes:

  • There are no high-level abstractions over the operating system, such as programming languages or development APIs, to facilitate the writing of applications.
  • Applications are developed in an ad-hoc fashion. Moreover, typical WSN applications are monolithic pieces that include the hardware support, the operating system and the application itself.
  • Writing and maintaining applications for sensor networks is definitely a hard task, as is noted in the WSN literature.

This thesis proposal addresses these challenges and establishes as its main objective the development of a platform-independent architecture for writing portable WSN applications that can be easily moved among heterogeneous hardware and software platforms. More specifically, we can enumerate the following goals:

  • Design and implementation of a multi-layered software architecture that clearly distinguishes the different abstraction levels in a sensor node: the hardware, operating system and application levels (a sketch is given after this list).
  • A multi-platform development framework based on the Model Driven Architecture (MDA) standard. It will allow the graphical composition of OSAL applications, as well as the installation, deployment and simulation of the automatically generated applications.
  • Evaluation of the proposed architecture in terms of the resources used by the applications: footprint (RAM and ROM measurements), energy consumption, and execution compatibility between the generated applications.
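To make the layering concrete, here is a minimal sketch in which a portable application is written against an abstract sensing interface and each platform supplies its own driver. The class and method names are hypothetical, not the thesis' actual OSAL API.

```python
# Minimal sketch of the layering idea: the application is written against an
# abstract sensing interface, and each platform supplies its own driver.
# All names are hypothetical, not the thesis' actual OSAL API.
from abc import ABC, abstractmethod

class TemperatureSensor(ABC):                 # OS-abstraction layer
    @abstractmethod
    def read_celsius(self) -> float: ...

class TinyOSTemperature(TemperatureSensor):   # platform-specific layer
    def read_celsius(self) -> float:
        return 21.5   # would wrap the platform driver on a real mote

def application(sensor: TemperatureSensor):   # portable application layer
    if sensor.read_celsius() > 30.0:
        print("threshold exceeded")

application(TinyOSTemperature())
```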

Title: Sistema de ficheros paralelo escalable para entornos cluster
Author: Luis Miguel Sánchez García
Year: 2009
Description
Nowadays, the applications used in high-performance computing environments, such as scientific simulations or applications dedicated to data extraction (data mining), manage large amounts of information and need huge computing and memory resources.

The cluster architecture is the most common platform for HPC applications. There are two kinds of cluster architectures: those based on the aggregation of heterogeneous components, and those built with the homogeneous components of large supercomputers. Heterogeneous cluster architectures have a major problem: since they are built using different hardware and software technologies, no parallel file system adapts to all of the diverse technologies available on these architectures. Homogeneous large clusters, in turn, have an I/O imbalance problem, due to the large number of compute nodes compared to the small number of I/O nodes. This imbalance turns the I/O system into a bottleneck for HPC applications.

The most common approach to removing the heterogeneity of clusters is to adapt the nodes, integrating technology that provides compatibility with new systems. In the case of large clusters, the traditional solutions are the use of parallel file systems and changes in the infrastructure of the storage system, such as increasing the number of I/O nodes. In both cases, the solutions have high economic and time costs for the adaptation and configuration of the I/O infrastructure.

This thesis proposes a solution for the problems presented above. The goals are the following:

  • Providing uniform data access using standard I/O technologies, with the purpose of constructing storage systems in heterogeneous environments.
  • Balancing the effective I/O load and eliminating the overhead of storage systems in large-scale environments.

To achieve these objectives we designed the following solutions:

  • A parallel file system platform based on the use of standard technologies for building storage systems on heterogeneous clusters, which additionally provides applications with a homogeneous way of accessing the data.
  • An I/O architecture based on extending the memory-hierarchy scheme to large cluster environments, increasing the number of I/O nodes of the cluster in order to improve parallelism and reduce the I/O accesses that reach the storage.

This document details the proposed solutions and shows their evaluation.
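As a rough illustration of the data distribution underlying such a platform, the sketch below maps file blocks round-robin over a set of heterogeneous storage servers; the server names and the block size are made up.

```python
# Minimal sketch of round-robin striping, the basic mapping a parallel file
# system uses to spread a file's blocks over several I/O servers.
# Server names and block size are illustrative.
BLOCK_SIZE = 64 * 1024
servers = ["nfs://node1", "smb://node2", "http://node3"]  # heterogeneous backends

def locate(offset):
    """Map a byte offset to (server, block index within that server)."""
    block = offset // BLOCK_SIZE
    return servers[block % len(servers)], block // len(servers)

print(locate(0))              # ('nfs://node1', 0)
print(locate(BLOCK_SIZE))     # ('smb://node2', 0)
print(locate(3 * BLOCK_SIZE)) # ('nfs://node1', 1)
```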

Title: Técnicas de Inteligencia Artificial Emergente aplicadas al servicio de replicación de datos de arquitecturas Grid
Author: Víctor Méndez Muñoz
Year: 2007
Description
Emergent Artificial Intelligence techniques applied to the data replication service of Grid architectures.

Title: Mecanismos de incremento de prestaciones en el acceso a datos en Data Grids
Author: José María Pérez Menor
Year: 2006
Description
Mechanisms for improving data access performance in Data Grids.

Title: Técnicas de tolerancia a fallos en sistemas de ficheros paralelos para clusters
Author: Alejandro Calderón Mateos
Year: 2005
Description
This work introduces a fault-tolerance model for the files of a parallel file system. The main contributions of this PhD thesis are the following:

  • A fault-tolerance model for parallel file systems that allows employing different fault-tolerance mechanisms at the file level.
  • A model based on distribution patterns that offers a flexible and simple description of the fault-tolerance model.
  • An analysis of the main properties of the distribution schemes resulting from the distribution patterns defined in the proposed fault-tolerance model.
  • The algorithms needed to add, remove or modify the file-based fault-tolerance model in a dynamic way.
  • The introduction of distribution schemes based on external redundancy, which allow the dynamic addition and removal of fault-tolerance support for a file.
  • A POSIX extension to add, remove, modify and define the distribution schemes for files. The same functionality is also provided for MPI-IO through hints.

An evaluation of the proposed model has been made by implementing it in the Expand parallel file system. This evaluation shows that the overhead naturally introduced by fault-tolerant files is low, and that the model offers parallel file system users a simple and practical solution.
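As a rough illustration of a file-level distribution pattern with redundancy, the sketch below rotates an XOR parity block across servers, RAID-5 style. It is our own simplified example, not the thesis' actual scheme; the server count and layout are illustrative.

```python
# Minimal sketch of a per-file distribution pattern with rotating parity, in
# the spirit of RAID-5 applied at file level. Illustrative only.
N = 4  # number of servers

def stripe_layout(stripe):
    """Return (parity_server, data_servers) for one stripe of N-1 data blocks."""
    parity = (N - 1 - stripe % N) % N          # rotate parity across servers
    data = [s for s in range(N) if s != parity]
    return parity, data

def parity_block(blocks):
    """XOR-based redundancy: any single lost block is recoverable."""
    out = bytearray(len(blocks[0]))
    for b in blocks:
        for i, byte in enumerate(b):
            out[i] ^= byte
    return bytes(out)

for s in range(4):
    print(stripe_layout(s))   # parity rotates: server 3, 2, 1, 0
```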

Title: Propuestas arquitectónicas para servidores Web distribuidos con replicas parciales
Author: José Daniel García Sánchez
Year: 2005
Description
In this thesis a new distributed Web server architecture is proposed. The proposed architecture is based on the use of a distributed switch and the partial replication of contents, in such a way that high scalability can be achieved with regard to the managed data volume and without a reliability loss in the resulting system. Besides, content allocation may be adapted to service needs.
The proposals presented in this thesis include:

  • A new family of architectural solutions based on a Web cluster with a distributed switch, which satisfies the goals of partial replication and dynamic replica distribution while reducing reliability weaknesses.
  • A replica allocation algorithm that causes highly accessed elements to be replicated on more server nodes than lowly accessed elements (a sketch is given after the evaluation paragraph below).
  • A dynamic content replication strategy which determines when content redistribution is needed and how this redistribution must be performed.
  • The adaptation of three request dispatching policies to the case of partial content replication: round-robin dispatching, least-loaded-node dispatching and locality-aware request distribution (LARD).

The evaluations prove that the reliability of a cluster-based Web system is limited by the reliability of its Web switch. Likewise, this thesis shows that a system based on partial replication with a relatively low number of replicas offers a reliability equivalent to that of a system based on full replication, while its storage capacity is much higher. Besides, partial replication does not negatively affect the global system performance.
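A minimal sketch of the popularity-driven allocation idea follows; it is our own simplification, with illustrative figures, not the thesis' actual algorithm. Each document receives a replica count roughly proportional to its share of accesses, with a floor of one replica.

```python
# Minimal sketch of popularity-driven replica allocation: hot documents get
# replicas on more nodes than cold ones. Figures are illustrative.
def allocate_replicas(access_counts, n_nodes, min_replicas=1):
    """Map each document to a replica count proportional to its popularity."""
    total = sum(access_counts.values())
    plan = {}
    for doc, hits in access_counts.items():
        share = hits / total
        plan[doc] = max(min_replicas, round(share * n_nodes))
    return plan

print(allocate_replicas({"index.html": 900, "faq.html": 80, "old.html": 20}, 10))
# {'index.html': 9, 'faq.html': 1, 'old.html': 1}
```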

Title: Técnicas Arquitectónicas de Entrada/Salida para Sistemas Operativos Integrados
Author: Javier Fernández Muñoz
Year: 2004
Description
In this thesis we propose an I/O architecture for integrated systems, i.e., those that are prepared to serve real-time clients and regular clients altogether. The proposed architecture is composed of two main components:

  • A multipolicy disk scheduler.
  • A multipolicy cache manager.

The disk scheduler proposed in this thesis contains several request queues grouped into two levels. The first level contains one queue for each kind of request involved. The second level has only one queue, which is in charge of taking the chosen requests from the first level and sending them to the disk. The proposed disk scheduling algorithm involves sorting and prior-discarding techniques. This allows the number of served requests of each kind to be proportional to the amount of resources reserved. Furthermore, the algorithm takes care of the request deadlines where requests have temporal requirements.
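A minimal sketch of this two-level scheme follows, under our own simplifying assumptions (two request classes, a 3:1 reservation, and a deadline field on real-time requests); it is not the thesis' actual algorithm.

```python
# Minimal sketch: one first-level queue per request class; a second level
# picks from classes in proportion to their reservations and discards
# real-time requests whose deadline already passed. Illustrative only.
import collections, itertools, time

queues = {"real_time": collections.deque(), "regular": collections.deque()}
reservation = {"real_time": 3, "regular": 1}          # 3:1 service ratio

# Cyclic pick order in which each class appears as often as its reservation,
# so served requests stay proportional to the reserved resources.
pick_order = itertools.cycle(
    [cls for cls, share in reservation.items() for _ in range(share)]
)

def dispatch():
    """Choose the next request for the disk, or None if all queues are empty."""
    for _ in range(sum(reservation.values())):
        cls = next(pick_order)
        q = queues[cls]
        # Prior discarding: drop real-time requests that already missed.
        while cls == "real_time" and q and q[0]["deadline"] < time.time():
            q.popleft()
        if q:
            return q.popleft()
    return None

queues["regular"].append({"sector": 10})
queues["real_time"].append({"sector": 5, "deadline": time.time() + 0.1})
print(dispatch())   # the real-time request goes first under its 3:1 share
```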

The cache manager proposed in this thesis uses several replacement lists to manage the blocks, one list for each kind of request. This makes it possible to configure the cache behaviour for each kind of task. The cache algorithm selects the replacement list from which the next block will be evicted.

Moreover, two new specialized cache algorithms aimed at multimedia streams have been proposed; both can be included in the multipolicy cache manager. An interval-based algorithm is proposed, aimed at constant-bit-rate streams; in this thesis we prove analytically that this algorithm reaches the maximum performance for a system with only one disk. An algorithm based on cycle-guided block replacement is also proposed, aimed at variable-bit-rate streams. Both algorithms have been adapted to improve their performance in a system with several disks.
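As a rough sketch of the interval-caching idea for constant-bit-rate streams (our own simplification, with illustrative figures): between two consecutive readers of the same stream, only the blocks in the gap need to be cached for the trailing reader to be served from memory, so with a fixed budget the shortest intervals are the cheapest to retain.

```python
# Minimal sketch of interval caching for constant-bit-rate streams: keep the
# shortest gaps between consecutive readers, which yield the most cache hits
# per cached block. Positions and budget are illustrative.
def pick_intervals(reader_positions, cache_blocks):
    """reader_positions: sorted block offsets of concurrent readers of one stream."""
    gaps = sorted(b - a for a, b in zip(reader_positions, reader_positions[1:]))
    kept = []
    for gap in gaps:                 # smallest gaps first
        if gap <= cache_blocks:
            kept.append(gap)
            cache_blocks -= gap
    return kept

print(pick_intervals([0, 120, 150, 400], cache_blocks=200))  # [30, 120]
```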

The evaluation performed shows that the proposed solution can keep the relation between the reserved resources and the relative performance obtained for each kind of request. That goal is reached without neglecting the global performance of the whole system. Furthermore, it shows that the proposed system is flexible enough to work with any configuration.

Title: Arquitectura multiagente para E/S de alto rendimiento en clusters
Author: María de los Santos Pérez Hernández
Year: 2003
Description
A multi-agent architecture for high-performance I/O in clusters.

In-progress PhD Theses

The following PhD theses are being developed under the TEALES project.

Title: Collective I/O Techniques for Chip Multiprocessor Clusters
Author: Rosa Filgueira Vicente
Year: 2010
Description
I/O operations are an important limiting factor in achieving high performance in Chip Multiprocessor (CMP) clusters. As far as we know, there are no I/O techniques specially tuned for CMP clusters. The main purpose of our research is to develop I/O techniques for this kind of architecture. We propose several strategies that allow us to reduce both the number of I/O requests and the volume of data transferred among processes, thereby improving overall system performance in CMP clusters.
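A minimal sketch of one such strategy, request aggregation, follows (our own simplification, with illustrative offsets): overlapping or adjacent requests issued by processes sharing a chip are merged into fewer, larger requests before reaching the I/O system.

```python
# Minimal sketch of the aggregation idea behind collective I/O: requests from
# co-located processes are merged into fewer, larger, contiguous requests
# before touching the file system.
def merge_requests(reqs):
    """reqs: list of (offset, length) pairs from co-located processes."""
    merged = []
    for off, length in sorted(reqs):
        if merged and off <= merged[-1][0] + merged[-1][1]:
            last_off, last_len = merged[-1]
            merged[-1] = (last_off, max(last_len, off + length - last_off))
        else:
            merged.append((off, length))
    return merged

# Four per-process requests collapse into one I/O call:
print(merge_requests([(0, 4), (4, 4), (8, 4), (12, 4)]))  # [(0, 16)]
```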

Title: New strategies for characterizing and improving high performance I/O architectures
Author: Alberto Núñez Covarrubias
Year: 2011
Description
Nowadays, cluster and grid computing are playing an increasing role due to the fast evolution of computer networks and communication technologies. This entails the need to store and manage huge amounts of data efficiently. Storage subsystem performance is one of the major concerns arising in this kind of large computing network: the I/O system is usually the bottleneck of most computing systems. Detecting the cause of a problem can be an easy task on a single computer or a small network, but detecting problems and their causes in a large computing network is not trivial.

The major goal of this dissertation is to identify and discover strategies to improve the performance of large storage networks, their scalability, their resource management, etc. To perform those tasks, we have developed a parallel simulator called SIMCAN. Using parallel simulation, we are not limited to the resources that a single computer can supply to sequential models. The main goal of SIMCAN is to simulate large, complex storage networks. Moreover, with this simulator, high-performance applications can be modelled in large distributed environments. Thus, SIMCAN can be used for evaluating and predicting the impact of high-performance applications on overall system performance.
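As a rough illustration of the discrete-event core on which a simulator of this kind rests (this is not SIMCAN's actual API), the sketch below queues requests by timestamp against a disk modelled as a server with a fixed service time.

```python
# Minimal sketch of a discrete-event simulation of an I/O queue: events are
# ordered by timestamp; the disk is a server with a fixed service time.
# All figures are illustrative.
import heapq

events = []                     # (arrival_time, request_id)
SERVICE_TIME = 0.005            # 5 ms per request

for rid in range(3):            # three requests arriving 1 ms apart
    heapq.heappush(events, (rid * 0.001, rid))

disk_free_at = 0.0
while events:
    clock, rid = heapq.heappop(events)
    start = max(clock, disk_free_at)          # wait if the disk is busy
    disk_free_at = start + SERVICE_TIME
    print(f"req {rid}: arrive {clock*1e3:.1f} ms, done {disk_free_at*1e3:.1f} ms")
```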

Title: New techniques to model energy-aware I/O architectures based on SSD and hard disk drives
Author: Laura Prada Camacho
Year: 2012
Description
For years, performance improvements at the computer I/O subsystem and at other subsystems have advanced at their own pace, with smaller improvements at the I/O subsystem, making the overall system speed dependent on the I/O subsystem speed.

One of the main factors behind this imbalance is the inherent nature of disk drives, which has allowed big advances in disk densities but not so many in disk performance. Thus, to improve I/O subsystem performance, disk drives have become an object of study for many researchers, who in some cases have to use different kinds of models. Other research studies aim to improve I/O subsystem performance by tuning more abstract I/O levels. Since disk drives lie behind those levels, either real disk drives or models of them need to be used.

One of the most common techniques for evaluating the performance of a computer I/O subsystem is the use of detailed simulation models that include specific features of storage devices, such as disk geometry, zone splitting, caching, read-ahead buffers and request reordering. However, as soon as a new technological innovation appears, those models need to be reworked to include the new characteristics, making it difficult to keep general models up to date.

Our alternative is to model a storage device as a black-box probabilistic model, where the storage device itself, its interface and the interconnection mechanisms are modelled as a single stochastic process, defining the service time as a random variable with an unknown distribution. This approach allows disk service times to be generated with less computational power, by means of a variate generator included in a simulator, and therefore allows greater scalability in simulation-based evaluations of I/O subsystem performance.
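A minimal sketch of the black-box idea follows (the measurements are made up for illustration): the simulator does not model disk internals at all, it simply resamples an empirical distribution of measured service times.

```python
# Minimal sketch of a black-box probabilistic disk model: resample measured
# service times (an empirical distribution) inside the simulator, instead of
# simulating disk internals. The trace below is invented for illustration.
import random

measured_ms = [0.9, 1.1, 1.0, 4.8, 1.2, 5.1, 1.0, 0.8]   # measured trace

def service_time_ms(rng=random):
    """Variate generator: one draw from the empirical distribution."""
    return rng.choice(measured_ms)

random.seed(42)
print([service_time_ms() for _ in range(5)])
```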

Lately, energy saving in computing systems has become an important need. In mobile computers, the battery life is limited, and not wasting energy in certain parts extends the usable time of the computer. Here, again, the computer I/O subsystem has been pointed out as a field of study, because disk drives, a main part of it, are among the most power-consuming elements due to their mechanical nature. In server or enterprise computers, where the number of disks increases considerably, power saving may reduce the cooling requirements for heat dissipation and, thus, great monetary costs.

This dissertation also considers the question of saving energy in the disk drive by taking advantage of the diverse devices in hybrid storage systems composed of Solid State Disks (SSDs) and disk drives. SSDs and disk drives have different power characteristics, with SSDs consuming much less power than disk drives. In this thesis, several techniques that use SSDs as supporting devices for disk drives are proposed. Various options for managing SSD and disk devices in such hybrid systems are examined, and it is shown that the proposed methods save energy and monetary costs in diverse scenarios. A simulator composed of disk and SSD devices has been implemented, and the design and evaluation of the proposed approaches are studied with the help of realistic workloads.
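One such technique might look like the sketch below (our own simplified example, with illustrative thresholds, not the thesis' actual policy): frequently read blocks are promoted to the SSD so subsequent reads avoid the power-hungry disk.

```python
# Minimal sketch of a hybrid-storage policy: promote hot blocks to the SSD
# so the disk can stay idle longer. Threshold is illustrative.
from collections import Counter

hot_threshold = 3
access_count = Counter()
ssd_cache = set()

def read(block):
    access_count[block] += 1
    if block in ssd_cache:
        return "ssd"                      # low-power path, disk stays idle
    if access_count[block] >= hot_threshold:
        ssd_cache.add(block)              # promote; later reads avoid the disk
    return "hdd"                          # disk must spin up / stay up

print([read(7) for _ in range(5)])        # ['hdd', 'hdd', 'hdd', 'ssd', 'ssd']
```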

Title: Methods to enhance content-distribution for very large scale online communities
Author: Juan Manuel Tirado Martín
Year: 2013
Description
The surge of Web 2.0 has encouraged an increase in the number of users interacting with systems through the Internet. These systems provide users with new capabilities in terms of sharing, adding and consuming information. The number of users, and the amount of content they demand and consume, grows steadily. This fact makes the future of these systems uncertain and brings the scientific community new challenges in terms of scalability and quality of service.

Systems such as YouTube, Facebook, Twitter and Flickr, among others, have demonstrated a great variety of opportunities. Users contribute to the system by increasing the amount of available content and related information. This content is heterogeneous: videos, photos, news, music, comments, reviews, etc. There is a clear interaction between the users and the system and among the users themselves. Recent studies address the importance of understanding these forms of interaction. All these studies aim to create a solid theory explaining user interaction, and agree on the benefits of applying it to existing and future systems. Although there is a consensus about the benefits of applying social knowledge in order to improve performance, there is a gap between the theory and the definition of methods to exploit it. This gap is even bigger when we talk about how to apply these methods to real systems.

The growing size of current systems brings the client/server paradigm to its limits, and even suggests the unfeasibility of continuing to use it as we know it in such large systems. By contrast, paradigms such as P2P have demonstrated to be extremely efficient in large distributed applications. Using this technology could mean a clear improvement of the quality of service in the long term. To the best of our knowledge, the idea of combining P2P technology with the exploitation of the social knowledge existing in these systems has not been deeply studied.

This Ph.D. proposes the study and design of methods to enhance massive and heterogeneous content-distribution systems supporting very large scale online communities. These methods are intended to improve their global performance and to help define a more solid theory about the exploitation of social knowledge. The proposed methods include aspects such as system organization, community discovery and system evolution prediction.

Title: High-performance and fault-tolerant techniques for massive data distribution in online communities
Author: Daniel Higuero Alonso-Mardones
Year: 2013
Description
In recent years, the amount of information produced and consumed has experienced spectacular growth. New Internet applications such as social networks, Web 2.0 and user-generated content networks have contributed to increasing the amount of information available on the Internet. This increase in available data has not been matched by a corresponding improvement in network connectivity. In fact, the limiting factor is not the available bandwidth, but the ratio between the available bandwidth and the amount of data to be distributed or consumed. For residential users the available bandwidth is usually small, whereas for enterprise users it is the amount of information that limits the communication.

Technological advances have also contributed to modifying the behaviour of users and systems. New technology allows enterprises and scientists to solve problems with a finer level of detail, and an increase in detail usually leads to an increase in the amount of information produced by the applied algorithms. For residential users, the evolution of consumer electronics such as digital cameras, video recorders and multimedia devices has contributed to increasing the size of multimedia content.

Merging these two ongoing situations, there is a stringent need for systems that can efficiently distribute content to their users and that are able to evolve, in terms of capacity and processing capabilities, following the evolution of the behaviour of their user community.

This PhD proposal focuses on defining a new architecture for the distribution of huge data sets, based on the publish/subscribe paradigm with intelligent components. The research effort will be distributed into two main areas: social knowledge and user/environment constraints. The study of the user community will provide information regarding access patterns, content popularity, etc. This social knowledge will be leveraged by the system in order to dynamically adapt different parameters such as mirror provisioning or content replication. Additionally, user and environment constraints, such as quality of service or available bandwidth, will be taken into account by the architecture by addressing challenging issues such as download and notification scheduling, download priorities, etc.
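For reference, here is a minimal sketch of the publish/subscribe pattern the architecture builds on; the topic names are illustrative, and a real system would add brokers, persistence and filtering.

```python
# Minimal sketch of publish/subscribe: subscribers register interest in a
# topic and are notified when new content is published on it.
from collections import defaultdict

subscribers = defaultdict(list)

def subscribe(topic, callback):
    subscribers[topic].append(callback)

def publish(topic, item):
    for notify in subscribers[topic]:
        notify(item)

subscribe("dataset/genomics", lambda item: print("fetch", item))
publish("dataset/genomics", "release-42.tar")
```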

Title: High Performance Data Access in Large-Scale Distributed Systems
Author: Borja Bergua Guerra
Year: 2014
Description
A great number of scientific projects need supercomputing resources, such as, for example, those carried out in physics, chemistry, pharmacology, etc. Most of them also generate a great amount of data; for example, an experiment lasting a few minutes in a particle accelerator generates several terabytes of data.

In the last years, high-performance computing environments have evolved towards large-scale distributed systems such as Grids, Clouds, and volunteer computing environments. Managing a great volume of data in these environments poses an additional huge problem, since the data have to travel from one site to another through the Internet.

In this work, a novel generic I/O architecture for large-scale distributed systems used for high-performance and high-throughput computing will be proposed. This solution is based on applying parallel I/O techniques to remote data access. Novel replication and data search schemes will also be proposed; schemes that, combined with the above techniques, will improve the performance of the applications that execute in these environments. In addition, we propose to develop simulation tools that allow these and other ideas to be tested without the technical and logistic limitations of real platforms. An initial prototype of this solution has been evaluated, and the results show a noteworthy improvement in data access compared to existing solutions.
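A minimal sketch of applying parallel I/O to remote data access follows; the replica names and the fetch stub are hypothetical, standing in for real remote reads. A file is read as byte ranges pulled concurrently from several replicas.

```python
# Minimal sketch of parallel remote data access: a file is fetched as byte
# ranges pulled concurrently from several replicas. Illustrative only.
from concurrent.futures import ThreadPoolExecutor

replicas = ["site-a", "site-b", "site-c"]
FILE_SIZE, CHUNK = 9_000_000, 3_000_000

def fetch(replica, offset, length):
    # Stand-in for a real remote read (e.g. an HTTP/GridFTP range request).
    return bytes(length)

def parallel_read():
    ranges = [(i * CHUNK, CHUNK) for i in range(FILE_SIZE // CHUNK)]
    with ThreadPoolExecutor(max_workers=len(ranges)) as pool:
        # Spread the ranges over the replicas and fetch them concurrently.
        parts = list(pool.map(
            lambda r: fetch(replicas[(r[0] // CHUNK) % len(replicas)], *r), ranges
        ))
    return b"".join(parts)

print(len(parallel_read()))   # 9000000
```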