S-CASE Blog | S-CASE Development fuel booster

We have just finished hosting the 4th meeting of S-CASE here in Athens and are excited to have a pilot contributing to our big and exciting development goals.

Not only that, the project and the consortium itself have been a tank of inspiration, feeding us with new ideas and food for thought. As the first year draws to a close, and after a lot of creative preparation, our pilot now has a clear positioning and connection to the Watchtower API offering, which will soon be available for use by our clients. The API environment is designed so that the Watchtower platform becomes the backend analytics engine for customers who need automated intelligence in their systems without having to develop their own analytics infrastructure. This lets them move fast, at low cost and with reliability, since the services they consume have already been tested and are in production.

The vision of S-CASE is to provide tools for developers, along with the underpinning technologies that will support the insertion of rough system requirements in a variety of structured, semi-structured or unstructured formats for seamlessly generating draft software prototypes that will form the basis for complete software development.

S-CASE is seen as a rapid prototyping realm aiming at providing automated solutions for (a) the extraction of system specifications and low-level architecture, and (b) the discovery and synthesis of composite workflows of software artefacts from distributed open source and proprietary resources that fulfil the inserted system requirements in the best possible way.

Through the innovations it introduces, S-CASE is expected to have a significant impact on the reduction of the time that is required between the conceptualisation of a software system and its first prototype, thus improving the SE process in terms of development costs.

The S-CASE pilot – Unleashing the power of WISE


The power of the Watchtower lies to a large extent in our analytics engine, WISE (Watchtower’s Intelligent System Engine). Inside WISE lives all the automatic orchestration that takes the “discovery” work off the shoulders of energy management teams and lets them focus on their operations. The pilot will take our APIs one level deeper than today, inside WISE. That means that our partner and customer developers will be able to create their own orchestrations instead of only using the ones that WISE exposes. This takes us closer to a data-analytics-as-a-service model, where different customer needs and disparate data sets can be analysed in as customised a way as needed.

The Watchtower SaaS technology stack is an MVC architecture using JavaScript from end to end (MySQL, Express.js, Angular.js, Node.js), combined with the WISE core functionality developed in Java. Every rule of the WISE core is exposed as a RESTful web service, creating an ecosystem of intelligence in which separate orchestrations of these web services are composed depending on the functionality needed. The S-CASE platform, like a fuel booster, will rapidly improve our productivity in developing these new web services and in developing new intelligent orchestrations inside WISE.
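As a minimal sketch of the idea (rule names and payload shapes are hypothetical, not the actual WISE API), an orchestration of individually exposed rules can be modelled as a composition of functions over a shared data payload:

```javascript
// Illustrative sketch only: rule names and the payload shape are invented,
// not the actual WISE API. Each "rule" mimics a RESTful WISE service that
// takes an energy-readings payload and enriches it with its own analysis.

// One rule: flag readings that exceed a threshold.
const detectPeaks = (payload) => ({
  ...payload,
  peaks: payload.readings.filter((r) => r.kwh > payload.peakThreshold),
});

// Another rule: compute average consumption.
const averageLoad = (payload) => ({
  ...payload,
  avgKwh:
    payload.readings.reduce((sum, r) => sum + r.kwh, 0) /
    payload.readings.length,
});

// An orchestration is simply an ordered composition of rules, the way
// separate WISE services could be chained depending on the functionality needed.
const orchestrate = (...rules) => (payload) =>
  rules.reduce((acc, rule) => rule(acc), payload);

const energyAudit = orchestrate(detectPeaks, averageLoad);

const result = energyAudit({
  peakThreshold: 100,
  readings: [{ kwh: 80 }, { kwh: 120 }, { kwh: 40 }],
});
// result.peaks → [{ kwh: 120 }], result.avgKwh → 80
```

In the pilot, each step would be a network call to a WISE web service rather than a local function, but the composition pattern is the same.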

S-CASE Blog | The S-CASE concept

In our latest blog entry, project technical coordinator, Kyriakos Chatzidimitriou, takes us through the world of S-CASE, highlighting the project components and demonstrating how S-CASE will be realised.

The S-CASE project is about semi-automatically creating RESTful Web Services from multi-modal requirements using a Model Driven Engineering methodology. The world of web services is moving towards REST, and S-CASE aims to help developers implement such web services by focusing mainly on requirements engineering. The figure below depicts the basic components and the basic flow of events/data in S-CASE.


In order to better understand the practical application of the S-CASE solution, let’s take a look at a typical use case example.
Through the S-CASE IDE the user imports or creates multi-modal requirements for his/her envisioned application. The requirements may be:
  • Textual requirements in the form “The user/system must be able to …”,
  • UML activity and use case diagrams created in the platform or imported as images,
  • Storyboards for flow charting, and
  • Analysis class diagrams to improve the accuracy of the system to identify entities, their properties and their relationships.

The requirements are then processed through natural language processing and image analysis techniques in order to extract relevant software engineering concepts. These are mainly the identification of RESTful resources, their properties and relations and Create-Read-Update-Delete (CRUD) actions on resources. All these concepts are stored in the S-CASE ontology.
The above procedure also identifies action-resource tuples that can be created automatically by the system, like the action-resource “create bookmark” (automatically built), or others that need more elaborate processes, like “get the weather given geolocation coordinates” (semi-automatically built or composed). The latter are sent to the Web Services Synthesis and Composition module.
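As a toy illustration of this step (the real S-CASE pipeline uses full natural language processing and image analysis, far beyond a pattern match), extracting an action-resource tuple from a requirement of the form “The user must be able to …” might look like:

```javascript
// Toy illustration only: a simple pattern extracts an action-resource
// tuple from a textual requirement and classifies it by whether the
// action is plain CRUD (automatically buildable) or something more
// elaborate (to be handled by synthesis and composition).

const CRUD_ACTIONS = new Set(["create", "read", "update", "delete"]);

function extractTuple(requirement) {
  // Match e.g. "The user must be able to create a bookmark"
  const m = requirement.match(
    /must be able to (\w+)(?: (?:a|an|the))? (.+?)\.?$/i
  );
  if (!m) return null;
  const [, action, resource] = m;
  return {
    action: action.toLowerCase(),
    resource: resource.toLowerCase(),
    // Plain CRUD tuples can be generated automatically; others need the
    // Web Services Synthesis and Composition module.
    automatic: CRUD_ACTIONS.has(action.toLowerCase()),
  };
}

extractTuple("The user must be able to create a bookmark");
// → { action: "create", resource: "bookmark", automatic: true }
```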

The Web Services Synthesis and Composition module tries to synthesize elaborate processes by composing 3rd party web services into a single S-CASE composite web service. To perform such a computation, S-CASE provides a methodology for semantically annotating 3rd party web services using S-CASE domain ontologies, so that they can later be matched to the requirements of the composite service. The composite service is deployed to the YouREST deployment environment and registered in the directory of S-CASE web services for future reference and re-use.

Upon completing the stages above, the model driven engineering procedure initiates. The first step is to create the Computational Independent Model (CIM) out of the S-CASE ontology. The CIM contains the bare minimum information needed to scaffold a REST service that adheres to the requirements imposed by the user, i.e. it includes all the problem’s domain concepts.

After that, model transformations take place, transforming the CIM into a PIM (which incorporates design constraints but remains platform independent) and then into a PSM (which adds support for implementing the PIM with a specific suite of software tools such as Java, JAX-RS, Hibernate, JSON, JAXB, PostgreSQL, etc.). The final step is to automatically generate the code of the web service. Calls to composite services are wrapped inside the generated code. The code is built and deployed to YouREST for others to use.
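To give a feel for the final generation step, here is a hypothetical sketch (the actual S-CASE generator targets Java/JAX-RS; the model shape below is invented) of going from a minimal resource model, of the kind a CIM/PIM might carry, to REST route scaffolding:

```javascript
// Hypothetical sketch: derive REST route scaffolding from a minimal
// resource model. The real S-CASE generator emits full Java/JAX-RS code;
// this only illustrates the mapping from CRUD concepts to endpoints.

function scaffoldRoutes(model) {
  const routes = [];
  for (const resource of model.resources) {
    const path = `/${resource.name}s`;
    if (resource.crud.includes("create")) routes.push(`POST ${path}`);
    if (resource.crud.includes("read")) {
      routes.push(`GET ${path}`);       // list the collection
      routes.push(`GET ${path}/{id}`);  // fetch one resource
    }
    if (resource.crud.includes("update")) routes.push(`PUT ${path}/{id}`);
    if (resource.crud.includes("delete")) routes.push(`DELETE ${path}/{id}`);
  }
  return routes;
}

const routes = scaffoldRoutes({
  resources: [{ name: "bookmark", crud: ["create", "read", "delete"] }],
});
// routes → ["POST /bookmarks", "GET /bookmarks",
//           "GET /bookmarks/{id}", "DELETE /bookmarks/{id}"]
```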

In order to support software re-use, every software artefact created by this procedure is stored in the S-CASE repository for future retrieval.

Through S-CASE we plan to develop an ecosystem of services, along with the appropriate tools for service providers to develop quality software for SMEs with an affordable budget.

S-CASE Blog | Natural Language Processing

We recently got another research paper on our work in S-CASE accepted at a conference on natural language processing. The accepted paper describes our efforts to improve a parsing model that can automatically map software requirements written in natural language to formal representations based on semantic roles.

State-of-the-art semantic role labelling systems require large annotated corpora to achieve full performance. Unfortunately, such corpora are expensive to produce and often do not generalise well across domains. Even in-domain, errors are often made where syntactic information does not provide sufficient cues. In this paper, we mitigate both of these problems by employing distributional word representations gathered from unlabelled data. The rationale for this approach lies in the so-called distributional hypothesis of Zellig Harris, which states that words that occur in the same contexts tend to have similar meanings.

While straightforward word representations of predicates and arguments have already been shown to be useful for semantic analysis tasks, we show that further gains can be achieved by composing representations that model the interaction between predicate and argument, and that capture full argument spans.
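To illustrate the intuition (with tiny hand-made vectors, not the paper's actual model), one simple way to make a representation depend jointly on predicate and argument is an element-wise composition of their embeddings:

```javascript
// Illustrative only: hand-made 3-dimensional "embeddings"; the paper's
// composition model is more elaborate. Element-wise product is one simple
// composition that captures predicate-argument interaction, unlike using
// either vector on its own.

const embed = {
  install: [0.9, 0.1, 0.3],   // predicate
  software: [0.8, 0.2, 0.1],  // argument head word
};

// Compose predicate and argument vectors element-wise.
const compose = (p, a) => p.map((v, i) => v * a[i]);

const pa = compose(embed.install, embed.software);
// pa ≈ [0.72, 0.02, 0.03]
```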

S-CASE Blog | S-CASE & ETICS: Infrastructure Pilot Application

By using S-CASE tools, Engineering will develop the ETICS Build and Test Environment Manager (BTE Manager).

ETICS BTE is one of the three pilot applications, foreseen by the project, which will help partners validate and evaluate the S-CASE paradigm. ETICS (etics.res.eng.it) is a system which automates and improves the execution of builds and tests and verifies the quality of the software produced. It is especially conceived for distributed, multi-language and multi-platform software.

A build/test lifecycle cannot exclude the management of the environment where builds and tests are performed. In ETICS, at the moment, this is done manually by the developer or tester. The BTE Manager, developed by using S-CASE, will be able to create/modify/delete, according to users’ needs, the Virtual Machines of a certain Virtual Infrastructure. It will also optimize resource consumption by reusing inactive virtual machines for specific build sessions and by quickly disposing of unused machines. Users will no longer need to manually create the needed virtual machines, profile them (adding, for example, Application Servers, Containers, etc.), tune the environment (e.g. managing the IP addresses, DHCP Servers, etc.) and, after the development or tests, dispose of the whole environment.

Introduction & Overview to ETICS

ETICS (E-infrastructure for Testing, Integration and Configuration of Software) is a system which automates and improves the execution of builds and tests. It is especially conceived for distributed, multi-language and multi-platform software and provides meaningful measurements of the overall software quality. ETICS consists of a build and test execution system, offered as web-services, able to exploit distributed computational resources and a plug-in mechanism for integrating engineering tools, to design, maintain and monitor build and multi-node testing scenarios.

A complete build/test lifecycle cannot exclude the management of the environment where builds and tests are performed. Currently ETICS does not handle this aspect, which is performed manually by the developer or tester. In particular, he/she has to create the needed virtual machines, profile them (adding, for example, Application Servers, Containers, etc.), tune the environment (e.g. managing the IP addresses, DHCP Servers, etc.) and, after the development or tests, dispose of the whole environment.

The automation and the optimization of that process would be a very good improvement for Engineering’s development process. For this purpose, a good solution may be a web service that receives as input the features of the requested environment and creates/modifies/deletes the Virtual Machines of a certain Virtual Infrastructure accordingly. It may optimize resource consumption by reusing inactive virtual machines for specific build sessions and quickly disposing of unused machines.

The BTE Manager defines the needed VMs on the basis of the request received from ETICS and of the current status of the requested environment (e.g. is it already present? Do we need new VMs? Do we need to destroy some of the VMs?). In some cases it tries to optimize the resource consumption by reusing virtual machines belonging to different environments. Obviously, such an optimisation is not always appropriate: for example, a test environment, where performance evaluation is probably requested, cannot be built with machines shared with other experiments.

We can distinguish two main flows: one for creation and management of virtual environments and one for disposal. Each flow in turn is divided into flows related to the specific environment: build or test.

For example, let’s consider the flow concerning the creation of a virtual build environment:

1. ETICS sends the request, which is received by the RESTful interfaces
2. The request is a creation
3. The requested environment is a build environment
4. The module checks if the same virtual environment already exists: in that case the requested modifications will be evaluated, otherwise the request will be treated as a simple creation
5. The module evaluates whether there is any active VM belonging to other build environments that can also be used for this build activity. For example, a machine containing a web server may also host a DBMS
6. The exact number and features of the needed VMs are evaluated, and the appropriate commands are generated and sent to the target Infrastructure
7. The response is parsed and a final response is sent to ETICS.
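The decision logic in steps 4–6 can be sketched as follows (the names and request shape below are invented for illustration; the actual BTE Manager will be generated with S-CASE):

```javascript
// Hypothetical sketch of the BTE Manager decision logic for a "create
// build environment" request. Request/VM shapes are invented.

function planBuildEnvironment(request, existingEnvs, inactiveVms) {
  // Step 4: if the environment already exists, treat the request as a
  // modification of that environment.
  const existing = existingEnvs.find((e) => e.id === request.envId);
  if (existing) {
    return { operation: "modify", envId: request.envId };
  }
  // Step 5: build environments may reuse inactive VMs from other builds
  // whose installed software covers part of what is needed.
  const reusable = inactiveVms.filter((vm) =>
    request.software.some((s) => vm.software.includes(s))
  );
  // Step 6: whatever is not covered must be newly provisioned on the
  // target infrastructure.
  const covered = new Set(reusable.flatMap((vm) => vm.software));
  const toProvision = request.software.filter((s) => !covered.has(s));
  return {
    operation: "create",
    reuse: reusable.map((vm) => vm.name),
    provision: toProvision,
  };
}

const plan = planBuildEnvironment(
  { envId: "proj-x-build", software: ["web-server", "dbms"] },
  [],                                              // no existing environments
  [{ name: "vm-7", software: ["web-server"] }]     // one inactive VM
);
// plan → { operation: "create", reuse: ["vm-7"], provision: ["dbms"] }
```

Note that, per the remark below, for test environments the reuse step would simply be skipped.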

The steps requested by the other use cases are very similar: the only remark is that the creation or disposal of a test environment does not include the re-use (or the preservation) of VMs requested in other operations.

Build and Test Environment Manager

The ETICS Build and Test Environment Manager will be built with S-CASE. It will expose some web services to interact with ETICS and will use some web services to interact with the Virtual Infrastructures.

The exposed web service should take as input the features of the requested infrastructure. At a high level it should provide the following methods:

• Create test environment
• Create build environment
• Modify test environment
• Modify build environment
• Dispose test environment
• Dispose build environment

Every build or test environment should be identified by an ID related to the name of the ETICS project: every management operation on a certain environment can then be performed transparently by ETICS using the ID and a set of parameters.
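A minimal sketch of this API surface could look like the following (the ID scheme and class shape are hypothetical; the actual manager will be generated via S-CASE):

```javascript
// Hypothetical sketch of the BTE Manager API surface. The ID combines the
// ETICS project name with the environment type ("build" or "test"), so
// ETICS can address an environment transparently in later operations.

const envId = (project, type) => `${project}-${type}`;

class BteManager {
  constructor() {
    this.envs = new Map();
  }
  // Create {build,test} environment
  create(project, type, params) {
    const id = envId(project, type);
    this.envs.set(id, { id, type, params });
    return id;
  }
  // Modify {build,test} environment
  modify(project, type, params) {
    const env = this.envs.get(envId(project, type));
    if (env) Object.assign(env.params, params);
    return env;
  }
  // Dispose {build,test} environment
  dispose(project, type) {
    return this.envs.delete(envId(project, type));
  }
}

const mgr = new BteManager();
const id = mgr.create("etics-core", "build", { vms: 2 });
// id → "etics-core-build"; later calls address the same environment:
mgr.modify("etics-core", "build", { vms: 3 });
mgr.dispose("etics-core", "build");
```

In the real system each method would be exposed as a RESTful endpoint and would drive the target Virtual Infrastructure rather than an in-memory map.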

The central part of the activity, concerning the BTE Manager, describes the logical steps needed to generate the final request for the infrastructure: these functionalities are provided by the Core. In particular, it decides whether:

• The build or test environment requested already exists
• Some VMs have to be added or removed
• Some VMs can be reused to optimize the resource consumption

These operations may involve matchmaking algorithms and produce a request for a specific web service managing a specific Virtual Environment (for example, Windows Azure).

The web client sends the request, which can be a SOAP or REST request, specifying the operation that the Virtual Environment Manager must perform in order to create, delete or adapt the environment as requested.



The BTE Manager:

• Must translate the requests from ETICS into commands for the Virtual Infrastructure
  ◦ This could be accomplished by using a local database
  ◦ Alternatively, by talking with the Infrastructure
• Must know the software installed on each Virtual Machine
• Must be able to create/delete/modify with granularity ranging from whole environments to single machines

Interfaces

• Must expose a RESTful API through which ETICS will be able to request the environments
• It should be possible to obtain information on active environments through the RESTful API
• Must interact with the Infrastructure Management API of a set of infrastructures

Third party Services

The module to be produced will be able to interact with different Virtual Infrastructures. Currently CLOE, Engineering’s Virtual Infrastructure, is the most important one, but, since in several use cases Engineering developers need to use build and test environments on other Virtual Infrastructures, it will be very useful to support the most widespread ones.

A first list is the following:

• Amazon
• OpenStack
• Microsoft Azure.

These Virtual Infrastructures (IaaS) expose APIs that enable the management of their Virtual Machines. The management of the software on the VMs is more complex and, in general, depends on the VMs. The BTE Manager must also keep track of the features of the generated environments.

A complete software solution to manage Virtual Infrastructures and to profile Virtual Machines is Foreman (http://projects.theforeman.org/): it exposes a complete set of RESTful APIs to interact with several Virtual Infrastructures (including those in the list) and to profile the machines.

Foreman’s RESTful API should be added to the list above in order to have a complete and flexible approach to manage several Virtual Infrastructures.

S-CASE Blog | Towards the design of user friendly search engines for software projects

In our latest blog S-CASE Project Coordinator, Andreas Symeonidis, takes us through the world of search engines for software projects as we take a deep dive into making software development more user friendly.

Usually, when developers begin planning and designing software, they find they have too few appropriate tools that enable them to reuse the optimal set of functional requirements and the well-engineered software modules which satisfy those requirements.

In “Towards the design of user friendly search engines for software projects” we suppose that, were such tools available and this information properly stored, developers would be able to access other software engineers’ solutions to similar projects and could reuse them as off-the-shelf components, or could adjust them to their own needs.

Taking this argument one step further, one could argue that such a “search engine” for software projects could be interactive, allowing users to progressively identify the required software constructs, and adaptable, in order to increase its knowledge base. Question Answering (QA) systems could provide the means to realise such search engines, given that they exhibit these types of features.