
Position Details: Big Data Solution Architect

Location: Milford, OH
Openings: 1

Description:

Job Description:
We are looking for a self-motivated Big Data Solution Architect to drive innovation on interoperable technology platforms with Hadoop (MapR, Cloudera, Hortonworks, EMR, HDInsight, etc.) for enterprise customers in diverse industry verticals. Key focus areas are real-time analytics (e.g., Kafka, Spark Streaming), in-memory processing (Spark), Next Best Action, and the Internet of Things (IoT).

Responsibilities: 
Design and implement Big Data solutions, including architecture, design, and code reviews, to address business problems in various industry verticals.
Drive Proof of Concept (POC) and Proof of Technology (POT) evaluations on interoperable technology platforms.
Drive competency development in technology areas such as Hadoop and associated frameworks, Big Data appliances, in-memory processing, and NoSQL databases (such as HBase, MongoDB, Cassandra).
Support presales engineering activities for Big Data RFPs.
Support projects and delivery teams on technical issues and solutions.
Participate in external and internal branding of Big Data thought leadership.

Qualifications:
Technical:
Must have:

Hands-on experience with the Hadoop stack: MapReduce, Sqoop, Pig, Hive, Flume, Spark, Kafka, and HBase.
Hands-on experience with related and complementary open source platforms and languages (e.g., Java, Linux, Apache, Perl/Python/PHP, Chef, Scala).
Hands-on experience with web application frameworks and technologies such as Spring, Ajax, AngularJS, and object-relational (O-R) mapping.
Minimum 12 years of solid IT consulting experience in data warehousing, operational data stores, and large-scale implementations.
Minimum 5 years of experience in Core Java, Python, or Scala.
Minimum 4 years of hands-on experience with Hadoop technologies.
Design and implementation of Big Data solutions, including a leadership role in designing shared/reusable components.
Hands-on experience architecting and implementing Hadoop applications, including complete detailed design of the Hadoop solution: data ingestion, data storage/management, and data transformation.
Experience with the Hadoop security model, covering authentication, service-level authorization, authentication for web consoles, and data confidentiality.
Experience designing solutions for multiple large data warehouses, with a good understanding of cluster and parallel architecture.
Ability to design and document a comprehensive technical architecture based on Big Data technology and the customer's business requirements.
Analysis and documentation of source system data from traditional (RDBMS) and new data sources (web, machine-to-machine, geospatial, etc.).
Ability to calculate performance and volumetric requirements for infrastructure components from business SLAs and the technical architecture.
Ability to design an architecture using cloud and/or virtualization technology.
Ability to plan and execute a technology proof of concept (POC) using Big Data technology.
Experience handling structured and unstructured data with Big Data tools, best practices, and industry trends.

Hands-on technical competencies:
Java/J2EE, Linux, PHP, Perl, Python, C, C++, Scala
Hadoop, Hive, HBase, Pig, MapReduce, Spark, Kafka, and other Hadoop ecosystem components
NoSQL databases – Cassandra, MongoDB, MariaDB, Couchbase
Data warehouse, BI and ETL tools
Detailed knowledge of RDBMS data modeling and SQL
Knowledge of AWS and Azure would be an advantage.

Non-Technical:
Strong analytical and problem-solving skills.
Strong written and verbal communication skills.
Ability to work effectively under pressure with constantly changing priorities and deadlines.
Familiarity with project management and systems development life cycle processes, tools, concepts, and methodologies is a plus.
Ability to work independently and as a team member.




