Floe


Floe — Adaptive Framework for Dynamic Applications on the Cloud


Traditional scientific workflows have static structures and process data in batch mode. Emerging applications, however, require continuous operation over dynamic data and changing application needs. This motivates data flow programming frameworks that can adapt to changes in application structure, data feeds and rates, and latency requirements with minimal interruption to the flow of results. In addition, the advent of elastic platforms such as Clouds requires the execution model of these frameworks to adapt to dynamism in the infrastructure. Floe is an adaptive data flow framework designed for such dynamic applications on Cloud platforms. It provides programming abstractions that support traditional data flow and stream processing paradigms, while allowing dynamic application recomposition, changes to streaming data sources at runtime, and use of elastic Cloud platforms to optimize resource usage.

Design Primitives

The framework consists of four separate Java processes, each with a specific responsibility:

  • Coordinator
  • Manager
  • Container
  • GlobalHealthMonitor

A workflow or streaming application is represented as a Floe graph, where each node corresponds to a computation unit and each edge describes the communication between nodes. The communication between nodes can follow either a pull or a push model.
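
As a rough illustration, the sketch below models such a graph in Java; the class and method names (FloeGraph, addNode, connect) are hypothetical and do not reflect the actual Floe API.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical model of a Floe graph: nodes are computation units,
    // edges record the communication model between them.
    public class FloeGraphSketch {

        enum ChannelModel { PUSH, PULL }

        static class Node {
            final String name;
            Node(String name) { this.name = name; }
        }

        static class Edge {
            final Node from, to;
            final ChannelModel model;
            Edge(Node from, Node to, ChannelModel model) {
                this.from = from; this.to = to; this.model = model;
            }
        }

        static class FloeGraph {
            final List<Node> nodes = new ArrayList<Node>();
            final List<Edge> edges = new ArrayList<Edge>();
            Node addNode(String name) { Node n = new Node(name); nodes.add(n); return n; }
            void connect(Node from, Node to, ChannelModel m) { edges.add(new Edge(from, to, m)); }
        }

        public static void main(String[] args) {
            FloeGraph g = new FloeGraph();
            Node ingest = g.addNode("SensorIngest");
            Node filter = g.addNode("Filter");
            Node sink = g.addNode("Aggregate");
            g.connect(ingest, filter, ChannelModel.PUSH); // upstream pushes
            g.connect(filter, sink, ChannelModel.PULL);   // downstream pulls
            System.out.println(g.nodes.size() + " nodes, " + g.edges.size() + " edges");
        }
    }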


Each application has a single Coordinator, which is responsible for initializing the computation units over the available resources as specified in the Floe graph. The Floe graph states how many computation units of each type the application requires and the resource requirements of each unit. The actual resource allocation, however, is handled by an entity called the Manager.
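
The sketch below illustrates this division of labour: the Coordinator walks the per-type specifications from the graph and delegates placement to the Manager. PelletSpec and the Manager interface are hypothetical names used for illustration, not the actual Floe API.

    import java.util.List;

    // Hypothetical sketch: the Coordinator reads per-type requirements
    // from the Floe graph and delegates placement to the Manager.
    public class CoordinatorSketch {

        static class PelletSpec {
            final String type;
            final int instances;       // minimum number of units of this type
            final int cores, memoryMb; // resource requirement per unit
            PelletSpec(String type, int instances, int cores, int memoryMb) {
                this.type = type; this.instances = instances;
                this.cores = cores; this.memoryMb = memoryMb;
            }
        }

        interface Manager {
            /** Returns the id of the Container chosen to host the units. */
            String allocate(PelletSpec spec);
        }

        static void deploy(List<PelletSpec> graphSpecs, Manager manager) {
            for (PelletSpec spec : graphSpecs) {
                String container = manager.allocate(spec);
                System.out.println(spec.type + " x" + spec.instances + " -> " + container);
            }
        }
    }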

The Manager determines whether there are enough resources to accommodate the computation units and takes the necessary actions to meet the resource needs. The framework currently supports creating new VMs on the Eucalyptus Cloud infrastructure, which exposes REST calls to perform the various Cloud-related operations. The Manager therefore creates new resources when the current resources fall short.
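
A minimal sketch of this scale-out path follows, assuming an illustrative Eucalyptus endpoint and image id; the signQuery() helper is a placeholder, since the EC2-compatible query API that Eucalyptus exposes requires signed requests.

    import java.io.IOException;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical sketch of the Manager's scale-out path. The endpoint,
    // image id, and signQuery() are placeholders for illustration only.
    public class ManagerScaleOutSketch {

        static final String EUCA_ENDPOINT = "http://euca.example.org:8773/services/Eucalyptus";

        static boolean hasCapacity(int freeCores, int freeMemMb, int needCores, int needMemMb) {
            return freeCores >= needCores && freeMemMb >= needMemMb;
        }

        static void requestVm(String imageId) throws IOException {
            // RunInstances is the EC2-style action Eucalyptus exposes;
            // the mandatory request-signing step is omitted here.
            String query = "Action=RunInstances&ImageId=" + imageId
                         + "&MinCount=1&MaxCount=1";
            URL url = new URL(EUCA_ENDPOINT + "?" + signQuery(query));
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            System.out.println("RunInstances -> HTTP " + conn.getResponseCode());
        }

        // Placeholder: a real request needs an AWS-style query signature.
        static String signQuery(String query) { return query; }

        public static void main(String[] args) throws IOException {
            if (!hasCapacity(2, 2048, 4, 4096)) {  // current resources fall short
                requestVm("emi-12345678");         // hypothetical image id
            }
        }
    }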

The number of computation units specified for an application is not the maximum number the application may need; it is only the minimum. Since streaming applications are dynamic in nature, there must be a mechanism to alter the resource allocation as the application's needs change. The GlobalHealthMonitor gathers information from each Container, decides when and by how much the resources should be increased or decreased, determines how many computation units to add or remove, and signals the Container accordingly.
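
A toy version of such a scaling rule is sketched below; the backlog metric and thresholds are illustrative assumptions, not the monitor's actual policy.

    // Hypothetical sketch of the GlobalHealthMonitor's scaling decision:
    // compare the per-pellet message backlog reported by a Container
    // against thresholds and return a delta. Thresholds are illustrative.
    public class HealthMonitorSketch {

        static final int SCALE_UP_BACKLOG = 1000;  // messages queued per pellet
        static final int SCALE_DOWN_BACKLOG = 10;

        /** Returns how many pellet instances to add (positive) or remove (negative). */
        static int scalingDelta(int backlog, int pellets, int minPellets) {
            int perPellet = backlog / Math.max(pellets, 1);
            if (perPellet > SCALE_UP_BACKLOG) {
                return 1;                          // grow by one unit
            }
            if (perPellet < SCALE_DOWN_BACKLOG && pellets > minPellets) {
                return -1;                         // shrink, but never below the graph's minimum
            }
            return 0;
        }

        public static void main(String[] args) {
            System.out.println(scalingDelta(5000, 2, 1)); // heavy backlog -> +1
            System.out.println(scalingDelta(5, 3, 1));    // idle -> -1
        }
    }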


The computation units are monitored by an entity called the Container. Every machine or VM instance has exactly one Container, which handles all control related to the computation units on that machine. To make use of the multiple cores available on these machines, and since a computation unit may itself use multiple cores, the Container creates entities called Flakes, which hold the actual computation units. Each Container can hold multiple Flakes, depending on the resources available on the VM (free cores, memory), and each Flake can hold multiple computation units, which we refer to as Pellets. The Pellet logic is specific to the application at hand. Flakes are responsible for exchanging information with other Flakes, whether within the same Container or in a different one.
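
The stub below captures this Container/Flake/Pellet hierarchy under assumed names; the Pellet interface shown is illustrative rather than Floe's actual contract.

    import java.util.ArrayList;
    import java.util.List;

    // Hypothetical sketch of the Container -> Flake -> Pellet hierarchy.
    // All names here are illustrative, not the actual Floe API.
    public class FlakeHierarchySketch {

        /** Application-specific compute logic plugged into a Flake. */
        interface Pellet {
            String process(String message);
        }

        /** A Flake hosts one or more Pellets of the same type. */
        static class Flake {
            final List<Pellet> pellets = new ArrayList<Pellet>();
            void addPellet(Pellet p) { pellets.add(p); }
        }

        /** One Container per machine/VM; holds Flakes up to the free cores/memory. */
        static class Container {
            final List<Flake> flakes = new ArrayList<Flake>();
            final int freeCores;
            Container(int freeCores) { this.freeCores = freeCores; }
            void addFlake(Flake f) { flakes.add(f); }
        }
    }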

Each Flake contains a buffer that stores incoming messages; it invokes the Pellet to process each message and sends the processed message on to the subsequent Flakes. The communication channels can be push or pull: a Flake can push messages to the subsequent Flakes, or a Flake can pull messages from a specified Flake. Likewise, a channel can be multiplexed, where every message is sent to every connected Flake, or round-robin, where each message is sent to only one Flake to balance the load.
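
The loop below sketches a push-style Flake supporting both dispatch modes, with a BlockingQueue standing in for the buffer; the Pellet and Channel interfaces are assumptions made for illustration.

    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Hypothetical sketch of a Flake's message loop: buffer incoming
    // messages, invoke the Pellet, then forward either to every successor
    // (multiplexed) or to one successor in turn (round-robin).
    public class FlakeDispatchSketch {

        interface Pellet { String process(String msg); }
        interface Channel { void send(String msg); }

        static class Flake implements Runnable {
            final BlockingQueue<String> buffer = new LinkedBlockingQueue<String>();
            final Pellet pellet;
            final List<Channel> successors;   // assumed non-empty
            final boolean multiplexed;        // true: send to all; false: round-robin
            int next = 0;

            Flake(Pellet pellet, List<Channel> successors, boolean multiplexed) {
                this.pellet = pellet;
                this.successors = successors;
                this.multiplexed = multiplexed;
            }

            public void run() {
                try {
                    while (true) {
                        String out = pellet.process(buffer.take());
                        if (multiplexed) {
                            for (Channel c : successors) c.send(out); // every successor
                        } else {
                            successors.get(next).send(out);           // one successor
                            next = (next + 1) % successors.size();    // balance the load
                        }
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    }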


The Coordinator only specifies the requirements of the Pellets (e.g., number of cores, memory) and the Manager places them in the Container that best matches those requirements.
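
"Best matches" is not defined further here; one plausible reading is a best-fit search over each Container's free resources, as in the sketch below, where ContainerInfo and the policy itself are assumptions.

    import java.util.List;

    // Hypothetical sketch of the Manager's placement step: pick the
    // Container whose free resources best fit the Pellet's requirements.
    public class PlacementSketch {

        static class ContainerInfo {
            final String id;
            final int freeCores, freeMemMb;
            ContainerInfo(String id, int freeCores, int freeMemMb) {
                this.id = id; this.freeCores = freeCores; this.freeMemMb = freeMemMb;
            }
        }

        /** Best fit: the feasible Container leaving the fewest spare cores. */
        static ContainerInfo bestFit(List<ContainerInfo> containers, int cores, int memMb) {
            ContainerInfo best = null;
            for (ContainerInfo c : containers) {
                if (c.freeCores >= cores && c.freeMemMb >= memMb
                        && (best == null || c.freeCores < best.freeCores)) {
                    best = c;
                }
            }
            return best; // null means nothing fits; the Manager must scale out
        }
    }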

Prototype Implementation

The current implementation uses REST APIs to enable communication between the Coordinator, Manager, and Container. Inter-Flake communication happens over TCP sockets. We leverage Java's multithreading support to let a Pellet make use of multiple cores to perform its function.
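
In the spirit of that design, the snippet below shows one straightforward way to spread Pellet work across cores with a fixed thread pool; this is a generic Java pattern, not necessarily how Floe's Flakes schedule their work.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    // Hypothetical sketch of exploiting multiple cores: run each Pellet
    // worker on its own thread from a pool sized to the available cores.
    public class MultiCorePelletSketch {

        public static void main(String[] args) {
            int cores = Runtime.getRuntime().availableProcessors();
            ExecutorService pool = Executors.newFixedThreadPool(cores);
            for (int i = 0; i < cores; i++) {
                final int id = i;
                pool.submit(new Runnable() {
                    public void run() {
                        // A real worker would loop over the Flake's buffer
                        // and invoke the Pellet; one message is shown here.
                        System.out.println("pellet worker " + id + " processing");
                    }
                });
            }
            pool.shutdown();
        }
    }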

Ongoing Work

The current prototype implementation works on a fixed set of known resources. We are working on enhancing the resource allocation feature to use the elasticity of the Cloud and register new VMs whenever a need arises.

Downloads

Team

  • Alok Kumbhare
  • Charith Wickramaarachchi
  • Yogesh Simmhan
  • Sreedhar Natarajan (graduated)