Conflux is a unified big data integration platform with the flexibility to integrate with many technologies, and the accelerators to help build value-generating business applications swiftly.
Conflux is designed with one goal in mind: to make it simple for users to adopt Hadoop for data at rest (batch processing), Storm for data in motion (real-time processing), and Spark for in-memory processing. It is powered by a highly scalable and optimized data processing engine. The platform is architected to coexist and interact with your existing enterprise infrastructure via standardized REST interfaces.
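As a rough illustration of REST-based integration, the sketch below builds a JSON request body for submitting a workflow run. The endpoint, field names, and values are hypothetical, invented for this example; they do not reflect the actual Conflux API.

```python
import json

# Hypothetical payload for submitting a workflow run over the platform's
# REST interface. All field names and values here are illustrative only.
job_request = {
    "workflow": "customer_etl",  # name of a workflow built in the Designer
    "engine": "hadoop",          # or "storm" / "spark", per processing mode
    "mode": "batch",
    "schedule": "0 2 * * *",     # e.g. run nightly at 02:00
}

body = json.dumps(job_request)
print(body)

# A client would then POST this body to a job-submission endpoint
# (illustrative URL), for example with the `requests` library:
#   requests.post("https://conflux.example.com/api/v1/jobs", data=body,
#                 headers={"Content-Type": "application/json"})
```

The same pattern (serialize a job definition, POST it to the platform, poll a status endpoint) would apply to scheduling and monitoring runs from existing enterprise tooling.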
Highlights of the Conflux Platform
- Batch/Hadoop support - Perform ETL processing at scale on Hadoop
- Streaming/Storm support - Complex event processing (CEP) on Storm
- Fast cluster computing - In-memory processing on Spark
- Fully functional UI with Designer and Monitoring capabilities
- Users can design workflows, schedule jobs, and execute and monitor runs
- Technologies supported - RDBMS, Files, HBase, NoSQL, and others
- Messaging - Kafka, RabbitMQ
Benefits of using our Platform
- Cost Savings - Staff need not learn MapReduce or write custom code to implement integrations; through its visual workflows, the platform abstracts away all the coding needed
- Productivity Improvement - Designing and running processing jobs on the platform saves time compared with writing time-consuming custom code and scripts
- Scale and Complexity Management - The platform provides an extensive set of ever-growing connectors to data sources, CRM and other such systems, and manages version compatibility across the processing environments like Hadoop and Storm
- Speed of Implementation - Built-in templates and accelerators for key verticals and business processes help your organization get started on data integration workflows quickly, and enable users to easily refine any workflow to move from experimentation to production