SLS - Software and Distributed Systems Science
Since the 2000s, the number and complexity of computers, from simple connected objects to the largest data centers, have increased dramatically. Over the same period, application and system services have migrated from a mainly centralized world to a highly distributed one, making them increasingly complex to analyze, develop, fix and maintain. To address these new challenges, software science and distributed systems research have gradually converged.
Drawing on the skills of the research teams involved, the SLS cluster pays particular attention to programming, software and systems engineering, in both embedded and highly distributed contexts, by exploring the complementary aspects of languages, models and systems. The skills, scientific expertise and projects of the teams that make up the SLS cluster define three differentiating orientations:
- Programming languages and modeling languages for the specification, verification and development of complex software. Research on programming languages focuses on the definition and implementation of new concepts in the form of domain-specific languages and language constructs, with an emphasis on the energy management of software and systems (green IT), from micro-sensors to cloud computing. Research on modeling languages focuses on paradigms for the structural and functional decomposition of complex systems, formalizing architectural elements, domain concerns, patterns and styles, as well as the engineering approach by which these coherent and autonomous elements can be managed.
- Distributed systems and algorithms for the management, possibly in real time, of IT resources both at the application level (e.g. in the development of the social and semantic web) and at the system level (e.g. in systems for optimizing computing, memory, disk and network capacities). The main application domains are the Internet of Things and cloud computing.
- The systems and software addressed by the cluster are, among other things, distributed, (a)synchronous, real-time, autonomous, dynamic and ubiquitous, and therefore require the development (automatic or not) of models for analysis purposes, targeting in particular correctness, security and performance, including energy consumption. Indeed, the function of a model is to facilitate the analysis of a system by simplifying and interpreting it. The cluster intends to meet these challenges by relying on its proven internal skills: modeling of living systems, discrete-event systems, model checking, proofs, timed, parameterized and probabilistic models, model-driven engineering, dynamic reconfiguration of software architectures, flow modeling, and modeling of software evolution.
Over the next few years, the ever-increasing presence of IT in our daily lives, whether in services (from the web to social networks) or objects (connected or not), raises many challenges on the software, model and system fronts. The cluster's triple competence is a strength in meeting these challenges. Among the challenges addressed: How should software be structured for evolution and composition? What methods and tools are needed for the analysis, verification and validation of complex systems? How can highly distributed infrastructures be administered, managed, optimized, programmed and evolved in a safe and secure manner?