1130+ Hadoop Training in Bangalore

    Hadoop Training in Bangalore as on Dec 13, 2017

    1. DVS Technologies, Marathahalli Colony

      135 Reviews 8.3 Sulekha Score
      +91 80 48066750
      Hadoop Training, Big data & Hadoop training
      About
      We leverage our state-of-the-art training room facilities and more than a decade of domain experience and expertise to provide niche technology training services. We have trained and certified over 1,000 IT professionals in Big Data technologies such as Hadoop, MapReduce, Hive, Pig, HBase, Sqoop, Flume, Oozie, ZooKeeper, and Spark.
    2. +91 80 48100596
      Hadoop Training, Business intelligence & analytics training
      About

      Our dedicated trainers and consultants have extensive corporate experience in successfully delivering effective business solutions.

    3. Gensoft IT India, Jayanagar

      55 Reviews 6.9 Sulekha Score
      +91 80 48067108
      Hadoop Training, Informatica training
      About
      Hadoop Administration training from Gensoft IT India gives participants expertise in all the steps necessary to operate and maintain a Hadoop cluster, from planning, installation, and configuration through load balancing, security, and tuning.
    4. +91 76003 00006
      Hadoop Training, Business intelligence & analytics training
      Also Servicing: Bangalore

      No. 309, Block-A, Ganesh Maredian, Bairathi Colony, Indore - 452007

    5. Nikhil Technologies, Marathahalli

      1 Review 6.2 Sulekha Score
      +91 99002 82636
      Hadoop Training, Informatica training
      About

      Nikhil Technologies provides software training, corporate training, and live projects.

      Our mission is to offer a comprehensive range of quality training dedicated to meeting the requirements of every student. Training is provided with live projects.

      Our capability is a unique blend of subject-matter expertise and the ability to deliver real-time scenario training across various segments.

      Our structure is designed to enable and enhance our core competencies so that we can continue to meet and exceed students' expectations. It is built around three areas: quality, expertise, and delivery.

    6. +91 80 43692296
      Hadoop Training, Business intelligence & analytics training
      About
      The Willsys training program has only one objective: to transform you into a sought-after global professional. More specifically, it aims to equip you with the skill sets employers value.
    7. +91 80 43691745
      Hadoop Training, Business intelligence & analytics training
      About
      We provide 100% job-oriented, practical, real-time training courses in Hadoop, and we have an excellent track record of placing 90% of our students in various MNC and startup companies.
    8. RJS Recruitment & Consulting, BTM Layout

      30 Reviews 6.6 Sulekha Score
      +91 80 48032368
      Hadoop Training, SAS training
      About
      RJS Technologies came into being in 2009, and the attitude and intellect we possess let us deliver results that exceed expectations.
    9. +91 80 48031916
      Hadoop Training, SAS training
      About
      Innovator is an education centre headquartered in Bangalore, India. Its main activity is providing information technology training, coaching classes, consulting services, and outsourcing, delivered to students all over the world via computer-based training programs and company-owned learning centres. Innovator Institute's most popular career programs are Innovator Certified Network Specialist (ICNS) and Innovator Certified Systems Specialist (ICSS). Innovator also offers globally accepted certifications from Microsoft, Red Hat, Cisco, and others, and provides industry internships in the form of on-the-job training with a stipend.
    10. Stansys Software Solutions, BTM Layout

      5 Reviews 6.3 Sulekha Score
      +91 80 48067240
      Hadoop Training, SAS training
      About
      Our training program is unique and enables students to be functional and productive in Hadoop. STANSYS has been a success for our early batches, and it will make you a success if you join us.

    Big Data & Apache Hadoop Developer Training Highlights:

      1. Master the concepts of Hadoop Distributed File System and MapReduce framework
      2. Set up a Hadoop Cluster
      3. Understand Data Loading Techniques using Sqoop and Flume
      4. Program in MapReduce (Both MRv1 and MRv2)
      5. Learn to write Complex MapReduce programs
      6. Program in YARN (MRv2)
      7. Perform Data Analytics using Pig and Hive
      8. Implement HBase, MapReduce Integration, Advanced Usage and Advanced Indexing
      9. Have a good understanding of ZooKeeper service
      10. Understand the new features in Hadoop 2.0: YARN, HDFS Federation, and NameNode High Availability
      11. Implement best Practices for Hadoop Development and Debugging
      12. Implement a Hadoop Project
      13. Work on a Real Life Project on Big Data Analytics and gain Hands on Project Experience

      1. Introduction: Apache Hadoop

      • Why Hadoop?
      • Core Hadoop Components
      • Fundamental Concepts

      2. Hadoop Installation and Initial Configuration

      • Deployment Types
      • Installing Hadoop
      • Specifying the Hadoop Configuration
      • Performing Initial HDFS Configuration
      • Performing Initial YARN and MapReduce Configuration
      • Hadoop Logging
      3. HDFS
      • HDFS Features
      • Writing and Reading Files
      • NameNode Memory Considerations
      • Overview of HDFS Security
      • Using the NameNode Web UI
      • Using the Hadoop File Shell
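
      The HDFS topics above can be practised from Java as well as from the hadoop fs shell. Below is a minimal, illustrative sketch using Hadoop's FileSystem API; the NameNode address and file path are placeholders, not values from the course.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.nio.charset.StandardCharsets;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class HdfsReadWrite {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // Placeholder NameNode address; use your cluster's fs.defaultFS value.
                conf.set("fs.defaultFS", "hdfs://namenode:8020");
                FileSystem fs = FileSystem.get(conf);

                // Write a small file into HDFS.
                Path file = new Path("/user/training/hello.txt");
                try (FSDataOutputStream out = fs.create(file, true)) {
                    out.write("Hello, HDFS\n".getBytes(StandardCharsets.UTF_8));
                }

                // Read it back line by line.
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                    String line;
                    while ((line = in.readLine()) != null) {
                        System.out.println(line);
                    }
                }
                fs.close();
            }
        }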

      4. Installing and Configuring Hive and Pig

      • Hive
      • Pig

      5. Managing and Scheduling Jobs

      • Managing Running Jobs
      • Scheduling Hadoop Jobs

      6. Getting Data into HDFS

      • Ingesting Data from External Sources with Flume
      • Ingesting Data from Relational Databases with Sqoop
      • Best Practices for Importing Data
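
      Sqoop imports are normally launched from the command line; the sketch below simply drives that command from Java so the import flags stay visible in one place. The JDBC URL, credentials, table name, and target directory are placeholders, not values from the course.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        public class SqoopImportRunner {
            public static void main(String[] args) throws Exception {
                // All connection details below are placeholders; substitute your own
                // database, credentials, and HDFS target directory.
                ProcessBuilder pb = new ProcessBuilder(
                        "sqoop", "import",
                        "--connect", "jdbc:mysql://dbhost:3306/retail_db",
                        "--username", "training",
                        "--password", "training",
                        "--table", "orders",
                        "--target-dir", "/user/training/orders",
                        "--num-mappers", "4");
                pb.redirectErrorStream(true);
                Process p = pb.start();

                // Stream Sqoop's console output so progress is visible.
                try (BufferedReader out = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = out.readLine()) != null) {
                        System.out.println(line);
                    }
                }
                System.out.println("sqoop exited with code " + p.waitFor());
            }
        }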

      7. YARN and MapReduce

      • What Is MapReduce?
      • Basic MapReduce Concepts
      • YARN Cluster Architecture
      • Using the YARN Web UI
      • MapReduce Version 1

      8. Planning Your Hadoop Cluster

      • Configuring Nodes

      9. Advanced Cluster Configuration

      • Advanced Configuration Parameters
      • Configuring Hadoop Ports
      • Explicitly Including and Excluding Hosts
      • Configuring HDFS High Availability

      10. HA (High Availability mode in Hadoop)

      • What is HA
      • Importance of HA
      • Configuring HA in Hadoop
      • Demonstrating HA
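
      The property keys below are the standard HDFS high-availability settings; the nameservice and host names are placeholders. In practice they live in hdfs-site.xml, but a small client-side sketch makes the moving parts explicit:

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FileSystem;

        public class HaClientConfig {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                // A logical nameservice replaces a single NameNode host.
                conf.set("fs.defaultFS", "hdfs://mycluster");
                conf.set("dfs.nameservices", "mycluster");
                conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
                // "nn-host1" and "nn-host2" are placeholder hostnames.
                conf.set("dfs.namenode.rpc-address.mycluster.nn1", "nn-host1:8020");
                conf.set("dfs.namenode.rpc-address.mycluster.nn2", "nn-host2:8020");
                // Client-side proxy that fails over between the two NameNodes.
                conf.set("dfs.client.failover.proxy.provider.mycluster",
                        "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");

                // The client now talks to whichever NameNode is currently active.
                FileSystem fs = FileSystem.get(conf);
                System.out.println("Connected to: " + fs.getUri());
            }
        }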

      Fundamental: Introduction to BIG Data 

      11. Introduction to BIG Data

      • Introduction
      • BIG Data: Insight
      • What do we mean by BIG Data?
      • Understanding BIG Data: Summary
      • Few Examples of BIG Data
      • Why Is BIG Data a Buzz?

      12. BIG Data Analytics and why it’s a Need Now?

      • What is BIG data Analytics?
      • Why BIG Data Analytics is a ‘need’ now?
      • BIG Data: The Solution
      • Implementing BIG Data Analytics – Different Approaches

      13. Traditional Analytics vs. BIG Data Analytics

      • The Traditional Approach: Business Requirement Drives Solution Design
      • The BIG Data Approach: Information Sources drive Creative Discovery
      • Traditional and BIG Data Approaches
      • BIG Data Complements Traditional Enterprise Data Warehouse
      • Traditional Analytics Platform vs. BIG Data Analytics Platform

      14. Real Time Case Studies

      • BIG Data Analytics – Use Cases
      • BIG Data to predict your Customer’s Behaviors
      • When to Consider a BIG Data Solution?
      • BIG Data Real Time Case Study

      15. Technologies within BIG Data Eco System

      • BIG Data Landscape
      • BIG Data Key Components
      • Hadoop at a Glance
      • TF-IDF Formally Defined
      • Computing TF-IDF

      16. Calculating Word Co-Occurrences

      • Word Co-Occurrence: Motivation
      • Word Co-Occurrence: Algorithm

      Eco System: Integrating Hadoop into the Enterprise Workflow

      17. Augmenting Enterprise Data Warehouse

      • Introduction
      • RDBMS Strengths
      • RDBMS Weaknesses
      • Typical RDBMS Scenario
      • OLAP Database Limitations
      • Using Hadoop to Augment Existing Databases
      • Benefits of Hadoop
      • Hadoop Tradeoffs

      18. Introduction, usage and Basic Syntax of Sqoop

      • Importing Data from an RDBMS to HDFS
      • Sqoop: SQL to Hadoop
      • Custom Sqoop Connectors
      • Sqoop: Basic Syntax
      • Connecting to a Database Server
      • Selecting the Data to Import
      • Free-form Query Imports
      • Examples of Sqoop
      • Sqoop: Other Options
      • Demonstration: Importing Data with Sqoop

      Eco System: Hadoop Eco System Projects

      19. HIVE

      • Hive & Pig: Motivation
      • Hive: Introduction
      • Hive: Features
      • The Hive Data Model
      • Hive Data Types
      • Timestamps data type
      • The Hive Metastore
      • Hive Data: Physical Layout
      • Hive Basics: Creating Table
      • Loading Data into Hive
      • Using Sqoop to import data into HIVE tables
      • Basic Select Queries
      • Joining Tables
      • Storing Output Results
      • Creating User-Defined Functions
      • Hive Limitations
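
      A small, illustrative HiveServer2 JDBC session covering table creation, loading data, and a basic aggregate query. The host, credentials, table name, and HDFS path are placeholders, not values from the course.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class HiveJdbcExample {
            public static void main(String[] args) throws Exception {
                // HiveServer2 JDBC driver; host, database and table names are placeholders.
                Class.forName("org.apache.hive.jdbc.HiveDriver");
                try (Connection conn = DriverManager.getConnection(
                             "jdbc:hive2://hiveserver:10000/default", "training", "");
                     Statement stmt = conn.createStatement()) {

                    // Create a simple managed table and load data from HDFS.
                    stmt.execute("CREATE TABLE IF NOT EXISTS orders "
                               + "(id INT, customer STRING, amount DOUBLE) "
                               + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','");
                    stmt.execute("LOAD DATA INPATH '/user/training/orders' "
                               + "OVERWRITE INTO TABLE orders");

                    // A basic select with aggregation, as covered in the module.
                    try (ResultSet rs = stmt.executeQuery(
                            "SELECT customer, SUM(amount) AS total "
                          + "FROM orders GROUP BY customer")) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
                        }
                    }
                }
            }
        }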

      20. PIG

      • Pig: Introduction
      • Pig Latin
      • Pig Concepts
      • Pig Features
      • A Sample Pig Script
      • More Pig Latin
      • More Pig Latin: Grouping
      • More Pig Latin: FOREACH
      • Pig vs. SQL
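
      As a companion to the Pig Latin topics above, a short sketch that runs a word-count script through Pig's embedded PigServer API in local mode; the input path is a placeholder.

        import java.util.Iterator;

        import org.apache.pig.ExecType;
        import org.apache.pig.PigServer;
        import org.apache.pig.data.Tuple;

        public class PigWordCount {
            public static void main(String[] args) throws Exception {
                // Local mode keeps the sketch self-contained; use ExecType.MAPREDUCE on a cluster.
                PigServer pig = new PigServer(ExecType.LOCAL);

                // The input path is a placeholder; point it at any plain-text file.
                pig.registerQuery("lines = LOAD 'input.txt' AS (line:chararray);");
                pig.registerQuery("words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;");
                pig.registerQuery("grouped = GROUP words BY word;");
                pig.registerQuery("counts = FOREACH grouped GENERATE group, COUNT(words);");

                // Iterate over the result of the final alias.
                Iterator<Tuple> it = pig.openIterator("counts");
                while (it.hasNext()) {
                    System.out.println(it.next());
                }
                pig.shutdown();
            }
        }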

      21. Zookeeper

      • Configuring Zookeeper
      • About Zookeeper
      • Components in Zookeeper

      22. Flume

      • Flume: Basics
      • Flume's High-Level Architecture
      • Flow in Flume
      • Flume: Features
      • Flume Agent Characteristics
      • Flume's Design Goals: Reliability
      • Flume's Design Goals: Scalability
      • Flume's Design Goals: Manageability
      • Flume's Design Goals: Extensibility
      • Flume: Usage Patterns

      Fundamentals: Introduction to Apache Hadoop and its Ecosystem 

      22. The Motivation for Hadoop

      • Traditional Large Scale Computation
      • Distributed Systems: Problems
      • Distributed Systems: Data Storage
      • The Data Driven World
      • Data Becomes the Bottleneck
      • Partial Failure Support
      • Data Recoverability
      • Component Recovery
      • Consistency
      • Scalability
      • Hadoop’s History
      • Core Hadoop Concepts
      • Hadoop Very High-Level Overview

      23. Hadoop: Concepts and Architecture

      • Hadoop Components
      • Hadoop Components: HDFS
      • Hadoop Components: MapReduce
      • HDFS Basic Concepts
      • How Files Are Stored
      • How Files Are Stored: Example
      • More on the HDFS NameNode
      • HDFS: Points To Note
      • Accessing HDFS
      • hadoop fs Examples
      • The Training Virtual Machine
      • Demonstration: Uploading Files and new data into HDFS
      • Demonstration: Exploring Hadoop Distributed File System
      • What is MapReduce?
      • Features of MapReduce
      • Giant Data: MapReduce and Hadoop
      • MapReduce: Automatically Distributed
      • MapReduce Framework
      • MapReduce: Map Phase
      • MapReduce Programming Example: Search Engine
      • Schematic Process of a MapReduce Computation
      • The use of a combiner
      • MapReduce: The Big Picture
      • The Five Hadoop Daemons
      • Basic Cluster Configuration
      • Submitting A job
      • MapReduce: The JobTracker
      • MapReduce: Terminology
      • MapReduce: Terminology – Speculative Execution
      • MapReduce: The Mapper
      • MapReduce: The Reducer
      • Example Reducer: Sum Reducer
      • Example Reducer: Identity Reducer
      • MapReduce Example: Word Count
      • MapReduce: Data Locality
      • MapReduce: Is Shuffle and Sort a Bottleneck?
      • MapReduce: Is a Slow Mapper a Bottleneck?
      • Demonstration: Running a MapReduce Job
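
      As a concrete companion to the "Example Reducer: Sum Reducer" bullet above, a minimal sum reducer against the org.apache.hadoop.mapreduce API; it adds up the counts emitted for each key (for example, the 1s from a word-count mapper). The class name is chosen for this sketch only.

        import java.io.IOException;

        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Reducer;

        // Sums all values seen for a key, e.g. the 1s emitted by a word-count mapper.
        public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
            private final IntWritable result = new IntWritable();

            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable value : values) {
                    sum += value.get();
                }
                result.set(sum);
                context.write(key, result);
            }
        }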

      24. Hadoop and the Data Warehouse

      • Hadoop and the Data Warehouse
      • Hadoop Differentiators
      • Data Warehouse Differentiators
      • When and Where to Use Which

      25. Introducing Hadoop Eco system components

      • Other Ecosystem Projects: Introduction
      • Hive
      • Pig
      • Flume
      • Sqoop
      • Zookeeper
      • HBase

      Advanced: Basic Programming with the Hadoop Core API

      26. Writing MapReduce Program

      • A Sample MapReduce Program: Introduction
      • MapReduce: List Processing
      • MapReduce Data Flow
      • The MapReduce Flow: Introduction
      • Basic MapReduce API Concepts
      • Putting Mapper & Reducer together in MapReduce
      • Our MapReduce Program: WordCount
      • Getting Data to the Mapper
      • Keys and Values are Objects
      • What is WritableComparable?
      • Writing MapReduce application in Java
      • The Driver
      • The Driver: Complete Code
      • The Driver: Import Statements
      • The Driver: Main Code
      • The Driver Class: Main Method
      • Sanity Checking The Job’s Invocation
      • Configuring The Job With JobConf
      • Creating a New JobConf Object
      • Naming The Job
      • Specifying Input and Output Directories
      • Specifying the InputFormat
      • Determining Which Files To Read
      • Specifying Final Output With Output Format
      • Specify The Classes for Mapper and Reducer
      • Specify The Intermediate Data Types
      • Specify The Final Output Data Types
      • Running the Job
      • Reprise: Driver Code
      • The Mapper
      • The Mapper: Complete Code
      • The Mapper: import Statements
      • The Mapper: Main Code
      • The Map Method
      • The map Method: Processing The Line
      • Reprise: The Map Method
      • The Reducer
      • The Reducer: Complete Code
      • The Reducer: Import Statements
      • The Reducer: Main Code
      • The reduce Method
      • Processing The Values
      • Writing The Final Output
      • Reprise: The Reduce Method
      • Speeding up Hadoop development by using Eclipse
      • Integrated Development Environments
      • Using Eclipse
      • Demonstration: Writing a MapReduce program
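
      A compact WordCount that ties the driver and mapper walk-through above together, written against the newer org.apache.hadoop.mapreduce API (the topics above mention JobConf from the older mapred API, but the structure is the same). Input and output paths come from the command line; everything else is illustrative.

        import java.io.IOException;
        import java.util.StringTokenizer;

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
        import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
        import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

        public class WordCount {

            // Emits (word, 1) for every token in each input line.
            public static class TokenizerMapper
                    extends Mapper<LongWritable, Text, Text, IntWritable> {
                private static final IntWritable ONE = new IntWritable(1);
                private final Text word = new Text();

                @Override
                protected void map(LongWritable key, Text value, Context context)
                        throws IOException, InterruptedException {
                    StringTokenizer tokens = new StringTokenizer(value.toString());
                    while (tokens.hasMoreTokens()) {
                        word.set(tokens.nextToken().toLowerCase());
                        context.write(word, ONE);
                    }
                }
            }

            // Driver: configures the job and submits it to the cluster.
            public static void main(String[] args) throws Exception {
                if (args.length != 2) {
                    System.err.println("Usage: WordCount <input path> <output path>");
                    System.exit(1);
                }
                Job job = Job.getInstance(new Configuration(), "word count");
                job.setJarByClass(WordCount.class);
                job.setMapperClass(TokenizerMapper.class);
                // Hadoop's built-in IntSumReducer performs the same summation as the
                // SumReducer sketched earlier in this outline.
                job.setReducerClass(IntSumReducer.class);
                job.setOutputKeyClass(Text.class);
                job.setOutputValueClass(IntWritable.class);
                FileInputFormat.addInputPath(job, new Path(args[0]));
                FileOutputFormat.setOutputPath(job, new Path(args[1]));
                System.exit(job.waitForCompletion(true) ? 0 : 1);
            }
        }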

      27. Introduction to Combiner

      • The Combiner
      • MapReduce Example: Word Count
      • Word Count with Combiner
      • Specifying a Combiner
      • Demonstration: Writing and Implementing a Combiner
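
      Because word-count reduction is a plain sum, the same reducer class can act as a combiner, so enabling it is a single driver call. A tiny illustrative helper (the class and method names are invented for this sketch; only job.setCombinerClass is the actual MapReduce API):

        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

        public final class CombinerSetup {
            private CombinerSetup() {}

            // Summation is associative and commutative, so the reducer class can also
            // run as a combiner: partial counts are summed on the map side before the
            // shuffle, cutting the volume of intermediate data.
            public static void enableCombiner(Job job) {
                job.setCombinerClass(IntSumReducer.class);
            }
        }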

      Advanced: Problem Solving with MapReduce

      28. Sorting & Searching Large Data Sets

      • Introduction
      • Sorting
      • Sorting as a Speed Test of Hadoop
      • Shuffle and Sort in MapReduce
      • Searching

      29. Performing a secondary sort

      • Secondary Sort: Motivation
      • Implementing the Secondary Sort
      • Secondary Sort: Example
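
      A condensed, illustrative sketch of the pieces a secondary sort needs: a composite key, a partitioner on the natural key, and a grouping comparator on the natural key. The class names (NameValuePair and its nested classes) are invented for this sketch; in the driver they would be registered with job.setPartitionerClass(...) and job.setGroupingComparatorClass(...).

        import java.io.DataInput;
        import java.io.DataOutput;
        import java.io.IOException;

        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.io.WritableComparable;
        import org.apache.hadoop.io.WritableComparator;
        import org.apache.hadoop.mapreduce.Partitioner;

        // Composite key: records are partitioned and grouped by "name" but sorted by
        // (name, value), so each reducer sees a key group's values in ascending order.
        public class NameValuePair implements WritableComparable<NameValuePair> {
            private final Text name = new Text();
            private final IntWritable value = new IntWritable();

            public void set(String n, int v) { name.set(n); value.set(v); }
            public Text getName() { return name; }
            public IntWritable getValue() { return value; }

            @Override public void write(DataOutput out) throws IOException {
                name.write(out);
                value.write(out);
            }
            @Override public void readFields(DataInput in) throws IOException {
                name.readFields(in);
                value.readFields(in);
            }
            // Full sort order: natural key first, then the secondary field.
            @Override public int compareTo(NameValuePair other) {
                int cmp = name.compareTo(other.name);
                return cmp != 0 ? cmp : value.compareTo(other.value);
            }

            // Partition on the natural key only, so all values for a name reach one reducer.
            public static class NaturalKeyPartitioner extends Partitioner<NameValuePair, IntWritable> {
                @Override public int getPartition(NameValuePair key, IntWritable val, int numPartitions) {
                    return (key.getName().hashCode() & Integer.MAX_VALUE) % numPartitions;
                }
            }

            // Group on the natural key only, so one reduce() call covers all of a name's values.
            public static class NaturalKeyGroupingComparator extends WritableComparator {
                public NaturalKeyGroupingComparator() { super(NameValuePair.class, true); }
                @Override public int compare(WritableComparable a, WritableComparable b) {
                    return ((NameValuePair) a).getName().compareTo(((NameValuePair) b).getName());
                }
            }
        }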

      30. Indexing Data and Inverted Index

      • Indexing
      • Inverted Index Algorithm
      • Inverted Index: DataFlow
      • Aside: Word Count
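
      An illustrative mapper/reducer pair for the inverted index flow described above: the mapper emits (word, source file) pairs and the reducer collects the distinct files per word. The class names are invented for this sketch.

        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Set;
        import java.util.StringTokenizer;

        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Mapper;
        import org.apache.hadoop.mapreduce.Reducer;
        import org.apache.hadoop.mapreduce.lib.input.FileSplit;

        public class InvertedIndex {

            // Emits (word, source file name) for every token in the input.
            public static class IndexMapper extends Mapper<LongWritable, Text, Text, Text> {
                private final Text word = new Text();
                private final Text docId = new Text();

                @Override
                protected void map(LongWritable key, Text value, Context context)
                        throws IOException, InterruptedException {
                    // Use the input file's name as the document identifier.
                    docId.set(((FileSplit) context.getInputSplit()).getPath().getName());
                    StringTokenizer tokens = new StringTokenizer(value.toString());
                    while (tokens.hasMoreTokens()) {
                        word.set(tokens.nextToken().toLowerCase());
                        context.write(word, docId);
                    }
                }
            }

            // Collects the distinct documents that contain each word.
            public static class IndexReducer extends Reducer<Text, Text, Text, Text> {
                @Override
                protected void reduce(Text key, Iterable<Text> values, Context context)
                        throws IOException, InterruptedException {
                    Set<String> docs = new HashSet<>();
                    for (Text doc : values) {
                        docs.add(doc.toString());
                    }
                    context.write(key, new Text(String.join(",", docs)));
                }
            }
        }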

      31. Term Frequency-Inverse Document Frequency (TF-IDF)

      • Term Frequency-Inverse Document Frequency (TF-IDF)
      • TF-IDF: Motivation
      • TF-IDF: Data Mining Example
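
      One common formulation of TF-IDF, shown as a small worked example in Java; the counts are made up for illustration and other variants (for example, different logarithm bases or smoothing) exist.

        public class TfIdfExample {
            // tf  = term count in the document / total terms in the document
            // idf = log(total documents / documents containing the term)
            static double tfIdf(int termCountInDoc, int termsInDoc,
                                int totalDocs, int docsWithTerm) {
                double tf = (double) termCountInDoc / termsInDoc;
                double idf = Math.log((double) totalDocs / docsWithTerm);
                return tf * idf;
            }

            public static void main(String[] args) {
                // Hypothetical numbers: "hadoop" appears 3 times in a 100-word document,
                // and in 10 of the 1,000 documents in the corpus.
                System.out.println(tfIdf(3, 100, 1000, 10));  // 0.03 * ln(100) ≈ 0.138
            }
        }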