Lead Developer-D&BA

Req ID: 52277
Location: Bangalore, India

 

Sapiens International Corporation (NASDAQ and TASE: SPNS) is a leading global provider of software solutions for the insurance industry, with a growing presence in the financial services sector. We offer integrated core software solutions and business services, and a full digital suite for the property and casualty/general insurance; life, pension, and annuities; and reinsurance markets. Sapiens also serves the workers’ compensation and financial and compliance markets.

Our portfolio includes policy administration, billing, and claims; underwriting, illustration, and electronic application; and reinsurance and decision management software. Sapiens’ digital platform features customer and agent portals and a business intelligence platform. With a 30-year track record of delivering to more than 600 organizations, Sapiens’ team of over 4,000 employees operates through fully owned subsidiaries in North America, the United Kingdom, EMEA, and Asia Pacific. For more information: www.sapiens.com.

 

The Sapiens Digital team is developing Sapiens' next generation of online insurance services, enabling insurance companies to accelerate their digital services to customers and agents. The team builds online capabilities for insurers across mobile and web, covering a wide range of self-service and commerce solutions aimed at optimizing customers’ experience when they engage with their insurance companies and agents through digital channels.


For more information about Sapiens: https://www.sapiens.com/solutions-categories/data-and-digital/

 

  • B.E. (or equivalent)
  • Extensive hands-on experience in Java development, including strong knowledge of core Java concepts, data structures, and algorithms.
  • In-depth understanding of distributed data processing frameworks like Apache Spark, with specific expertise in Databricks.
  • Proficiency in designing and building data pipelines for data extraction, transformation, and loading (ETL).
  • Familiarity with big data technologies and concepts, including Hadoop, Hive, and HDFS.
  • Proven experience in building scalable and high-performance data solutions for large datasets.
  • Solid understanding of data modelling, database design, and data warehousing concepts.
  • Knowledge of both SQL and NoSQL databases, and ability to choose the right database type based on project requirements.
  • Demonstrated ability to write clean, maintainable, and efficient Java code for data processing and integration tasks.
  • Experience with Java libraries commonly used in data engineering, such as Apache Kafka for streaming data.
  • Extensive hands-on experience with Databricks for big data processing and analytics.
  • Ability to set up and configure Databricks clusters and optimize their performance.
  • Proficiency in Spark DataFrames and Spark SQL for data manipulation and querying.
  • Understanding of data architecture principles and experience in designing data solutions that meet scalability and reliability requirements.
  • Familiarity with cloud-based data platforms like AWS or Azure.
  • Strong problem-solving and analytical skills, with the ability to analyse complex data-related issues.
  • Capacity to propose innovative and efficient solutions to data engineering challenges.
  • Excellent communication skills, both verbal and written, with the ability to convey technical concepts to non-technical stakeholders effectively.
  • Experience working collaboratively in cross-functional teams, including Data Scientists, Data Analysts, and business stakeholders.
  • A strong inclination to stay updated with the latest advancements in data engineering, Java, and Databricks technologies.
  • Adaptability to new tools and technologies to support evolving data requirements.

 
