Hadoop Training and Certification in Bangalore | Sulekha

1270+ Hadoop Training Providers in Bangalore



Hadoop Training in Bangalore as of Feb 24, 2018

    1. DVS Technologies, Marathahalli Colony

      136 Reviews 8.6 Sulekha Score
      +91 80 48066750
      Hadoop Training, Big data & Hadoop training
      About
      We leverage our state-of-the-art training room facilities and more than a decade of domain experience to provide niche technology training services. We have trained and certified over 1,000 IT professionals in Big Data technologies such as Hadoop, MapReduce, Hive, Pig, HBase, Sqoop, Flume, Oozie, ZooKeeper and Spark.
    2. +91 80 48100596
      Hadoop Training, Big data & Hadoop training
      About

      Our dedicated trainers and consultants have extensive corporate experience in successfully delivering effective business solutions.

    3. UReach Solutions, Koramangala

      7 Reviews 6.3 Sulekha Score
      +91 80 61344749
      Hadoop Training, SAS training

      2nd Floor, No. 4/3, Koramangala, Bangalore - 560029

    4. +91 80 43691648
      Hadoop Training, Informatica training

      No. 4, 2nd Floor, Sathyam Arcade, 1st Phase, BTM Layout 2nd Stage, BTM Layout, Bangalore - 560076

    5. Sgraph Infotech, Marathahalli

      53 Reviews 8.7 Sulekha Score
      +91 80 43691201
      Hadoop Training, Informatica training

      3rd Floor, Outer Ring Road, Marathahalli, Bangalore - 560037

    6. Bigdatahub Org, Bellandur

      5.9 Sulekha Score
      +91 80 43691676
      Hadoop Training, Big data & Hadoop training

      Muthanna Gupta Building, No. 19, 3rd Floor, Outer Ring Road, Bellandur, Bangalore - 560103

    7. Nikhil Technologies, Marathahalli

      1 Review 5.9 Sulekha Score
      +91 80 48031744
      Hadoop Training, Informatica training

      No. 3, Ground Floor, VRKH Building, Vivekananda Layout, Marathahalli, Bangalore - 560037

    8. +91 80 43692296
      Hadoop Training, Business intelligence & analytics training
      About
      Willsys' training program has one objective: to transform you into a sought-after global professional. More specifically, it aims to equip you with the skill sets employers value.
    9. Byway Pro, Bellandur

      2 Reviews 6.5 Sulekha Score
      +91 80 48070431
      Hadoop Training, Business intelligence & analytics training

      2nd Floor, 32/16, Somashekar Reddy Building, 5th Cross, Bellandur Main Road, Marathahalli, Bellandur, Bangalore - 560103

    10. Inventateq, BTM 2nd Stage

      67 Reviews 8.2 Sulekha Score
      +91 80 48100630
      Hadoop Training, Informatica training

      No. 687, 1st Floor, 29th Main Road, BTM Lake Road, BTM 2nd Stage, Bangalore - 560068


    Recent Enquiries on Data Science & Business Analytics Training

    • Technology: Hadoop Administration
    • Mode of training: Classroom
    • Training for: Individual
    • Timing preferences: Weekdays (Morning)
    2 days ago
    • Technology: Hadoop
    • Mode of training: Classroom
    • Training for: Individual
    • Timing preferences: No Preference
    2 days ago
    • Technology: Big Data
    • Mode of training: Classroom
    • Training for: Individual
    • Timing preferences: No Preference
    4 days ago
    • Technology: Big Data
    • Mode of training: Classroom
    • Training for: Individual
    • Timing preferences: No Preference
    7 days ago

    Recent Bookings in Data Science & Business Analytics Training

    • Technology: Hadoop
    • Mode of training: Classroom
    • Training for: Individual
    • Timing preferences: No Preference
    2 days ago
    • Technology: Big Data
    • Mode of training: Classroom
    • Training for: Individual
    • Timing preferences: No Preference
    7 days ago
    • Technology: Hadoop Administration
    • Mode of training: Online
    • Training for: Individual
    • Timing preferences: Weekdays (Evening)
    12 days ago
    • Technology: Big Data
    • Mode of training: Online
    • Training for: Individual
    • Timing preferences: Weekdays (Morning)
    122 days ago

    Big Data & Apache Hadoop Developer Training Highlights:

      1. Master the concepts of the Hadoop Distributed File System and the MapReduce framework
      2. Set up a Hadoop Cluster
      3. Understand Data Loading Techniques using Sqoop and Flume
      4. Program in MapReduce (Both MRv1 and MRv2)
      5. Learn to write Complex MapReduce programs
      6. Program in YARN (MRv2)
      7. Perform Data Analytics using Pig and Hive
      8. Implement HBase, MapReduce Integration, Advanced Usage and Advanced Indexing
      9. Have a good understanding of the ZooKeeper service
      10. Understand new features in Hadoop 2.0: YARN, HDFS Federation, NameNode High Availability
      11. Implement best practices for Hadoop development and debugging
      12. Implement a Hadoop Project
      13. Work on a real-life Big Data Analytics project and gain hands-on project experience

      1. Introduction: Apache Hadoop

      • Why Hadoop?
      • Core Hadoop Components
      • Fundamental Concepts

      2. Hadoop Installation and Initial Configuration

      • Deployment Types
      • Installing Hadoop
      • Specifying the Hadoop Configuration
      • Performing Initial HDFS Configuration
      • Performing Initial YARN and MapReduce Configuration
      • Hadoop Logging
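
      To make the configuration topics above concrete, here is a minimal sketch of the core properties they refer to, expressed as programmatic Configuration settings purely for illustration; on a real cluster these values live in core-site.xml, hdfs-site.xml, yarn-site.xml and mapred-site.xml, and the host names used here (namenode, resourcemanager) are placeholders.

        import org.apache.hadoop.conf.Configuration;

        public class MinimalClusterConfig {
            public static Configuration build() {
                Configuration conf = new Configuration();
                // core-site.xml: where clients find the HDFS NameNode (placeholder host)
                conf.set("fs.defaultFS", "hdfs://namenode:8020");
                // hdfs-site.xml: HDFS block replication factor
                conf.set("dfs.replication", "3");
                // yarn-site.xml: ResourceManager host and the MapReduce shuffle service
                conf.set("yarn.resourcemanager.hostname", "resourcemanager");
                conf.set("yarn.nodemanager.aux-services", "mapreduce_shuffle");
                // mapred-site.xml: run MapReduce jobs on YARN (MRv2)
                conf.set("mapreduce.framework.name", "yarn");
                return conf;
            }
        }
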
      3. HDFS
      • HDFS Features
      • Writing and Reading Files
      • NameNode Memory Considerations
      • Overview of HDFS Security
      • Using the NameNode Web UI
      • Using the Hadoop File Shell
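
      As a companion to "Writing and Reading Files" and "Using the Hadoop File Shell" above, here is a minimal sketch of the same write/read round trip through the HDFS Java FileSystem API; the NameNode URI and the /user/train path are placeholders.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.nio.charset.StandardCharsets;
        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.fs.FSDataOutputStream;
        import org.apache.hadoop.fs.FileSystem;
        import org.apache.hadoop.fs.Path;

        public class HdfsReadWrite {
            public static void main(String[] args) throws Exception {
                Configuration conf = new Configuration();
                conf.set("fs.defaultFS", "hdfs://namenode:8020");   // placeholder NameNode URI
                FileSystem fs = FileSystem.get(conf);

                // Write a small file into HDFS (roughly: hadoop fs -put)
                Path file = new Path("/user/train/hello.txt");
                try (FSDataOutputStream out = fs.create(file, true)) {
                    out.write("Hello, HDFS\n".getBytes(StandardCharsets.UTF_8));
                }

                // Read it back (roughly: hadoop fs -cat)
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                    System.out.println(in.readLine());
                }
                fs.close();
            }
        }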

      4. Installing and Configuring Hive and Pig

      • Hive
      • Pig

      5. Managing and Scheduling Jobs

      • Managing Running Jobs
      • Scheduling Hadoop Jobs

      6. Getting Data into HDFS

      • Ingesting Data from External Sources with Flume
      • Ingesting Data from Relational Databases with Sqoop
      • Best Practices for Importing Data
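
      The Sqoop bullet above corresponds to a single "sqoop import" invocation; the sketch below drives the same import from Java through what is assumed here to be Sqoop 1.x's programmatic entry point (org.apache.sqoop.Sqoop.runTool). The JDBC URL, credentials, table and target directory are placeholders, and the command-line client is the usual way to run this.

        import org.apache.sqoop.Sqoop;

        public class SqoopImportSketch {
            public static void main(String[] args) {
                // Equivalent CLI: sqoop import --connect ... --table ... --target-dir ...
                String[] importArgs = {
                    "import",
                    "--connect", "jdbc:mysql://dbhost/retail_db",   // placeholder database
                    "--username", "retail_user",                    // placeholder credentials
                    "--password", "secret",
                    "--table", "orders",
                    "--target-dir", "/user/train/orders",           // HDFS destination
                    "--num-mappers", "4"                            // parallel import tasks
                };
                // Programmatic entry point (assumption); returns the tool's exit code
                System.exit(Sqoop.runTool(importArgs));
            }
        }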

      7. YARN and MapReduce

      • What Is MapReduce?
      • Basic MapReduce Concepts
      • YARN Cluster Architecture
      • Using the YARN Web UI
      • MapReduce Version 1

      8. Planning Your Hadoop Cluster

      • Configuring Nodes

      9. Advanced Cluster Configuration

      • Advanced Configuration Parameters
      • Configuring Hadoop Ports
      • Explicitly Including and Excluding Hosts
      • Configuring HDFS High Availability

      10. HA (High Availability mode in Hadoop)

      • What is HA
      • Importance of HA
      • Configuring HA in Hadoop
      • Demonstrating HA
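
      To ground the HA topics above, this is a sketch of the key properties involved in NameNode High Availability with automatic failover, written as programmatic settings for brevity; in practice they belong in hdfs-site.xml and core-site.xml, and the nameservice ID "mycluster", the NameNode IDs nn1/nn2, and all host names are placeholders.

        import org.apache.hadoop.conf.Configuration;

        public class HdfsHaConfigSketch {
            public static Configuration build() {
                Configuration conf = new Configuration();
                // One logical nameservice instead of a single NameNode host
                conf.set("dfs.nameservices", "mycluster");
                conf.set("dfs.ha.namenodes.mycluster", "nn1,nn2");
                conf.set("dfs.namenode.rpc-address.mycluster.nn1", "namenode1:8020");
                conf.set("dfs.namenode.rpc-address.mycluster.nn2", "namenode2:8020");
                // Shared edit log via a JournalNode quorum
                conf.set("dfs.namenode.shared.edits.dir",
                         "qjournal://jn1:8485;jn2:8485;jn3:8485/mycluster");
                // How clients discover which NameNode is currently active
                conf.set("dfs.client.failover.proxy.provider.mycluster",
                         "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider");
                // Automatic failover through ZooKeeper (ZKFC)
                conf.set("dfs.ha.automatic-failover.enabled", "true");
                conf.set("ha.zookeeper.quorum", "zk1:2181,zk2:2181,zk3:2181");
                // Clients address the nameservice, not a host
                conf.set("fs.defaultFS", "hdfs://mycluster");
                return conf;
            }
        }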

      Fundamentals: Introduction to BIG Data

      11. Introduction to BIG Data

      • Introduction
      • BIG Data: Insight
      • What do we mean by BIG Data?
      • Understanding BIG Data: Summary
      • A Few Examples of BIG Data
      • Why Is BIG Data a Buzzword?

      12. BIG Data Analytics and Why It's a Need Now

      • What is BIG data Analytics?
      • Why Is BIG Data Analytics a 'Need' Now?
      • BIG Data: The Solution
      • Implementing BIG Data Analytics – Different Approaches

      13. Traditional Analytics vs. BIG Data Analytics

      • The Traditional Approach: Business Requirement Drives Solution Design
      • The BIG Data Approach: Information Sources drive Creative Discovery
      • Traditional and BIG Data Approaches
      • BIG Data Complements Traditional Enterprise Data Warehouse
      • Traditional Analytics Platform vs. BIG Data Analytics Platform

      14. Real Time Case Studies

      • BIG Data Analytics – Use Cases
      • BIG Data to predict your Customer’s Behaviors
      • When to Consider a BIG Data Solution?
      • BIG Data Real Time Case Study

      15. Technologies within BIG Data Eco System

      • BIG Data Landscape
      • BIG Data Key Components
      • Hadoop at a Glance
      • TF-IDF Formally Defined
      • Computing TF-IDF
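
      Since "TF-IDF Formally Defined" and "Computing TF-IDF" appear above, it helps to write the standard definition out. With tf(t, d) the number of times term t occurs in document d, N the number of documents in the corpus, and df(t) the number of documents containing t (the logarithm base is an implementation choice):

          \mathrm{idf}(t) = \log \frac{N}{\mathrm{df}(t)}, \qquad
          \mathrm{tfidf}(t, d) = \mathrm{tf}(t, d) \cdot \mathrm{idf}(t) = \mathrm{tf}(t, d) \cdot \log \frac{N}{\mathrm{df}(t)}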

      16. Calculating Word Co-occurrences

      • Word Co-Occurrence: Motivation
      • Word Co-Occurrence: Algorithm
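
      A minimal sketch of the "pairs" formulation of the word co-occurrence algorithm listed above: the mapper emits a ((wordA,wordB), 1) record for every pair of words that appear on the same line, and a summing reducer (like the word-count reducer sketched later in this outline) totals the counts. The classic mapred API is used, and the tokenization is deliberately naive.

        import java.io.IOException;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.MapReduceBase;
        import org.apache.hadoop.mapred.Mapper;
        import org.apache.hadoop.mapred.OutputCollector;
        import org.apache.hadoop.mapred.Reporter;

        public class CoOccurrenceMapper extends MapReduceBase
                implements Mapper<LongWritable, Text, Text, IntWritable> {

            private static final IntWritable ONE = new IntWritable(1);

            public void map(LongWritable key, Text value,
                            OutputCollector<Text, IntWritable> output, Reporter reporter)
                    throws IOException {
                String[] words = value.toString().toLowerCase().split("\\W+");
                // Emit every ordered pair of distinct words on the same line
                for (int i = 0; i < words.length; i++) {
                    for (int j = 0; j < words.length; j++) {
                        if (i != j && !words[i].isEmpty() && !words[j].isEmpty()) {
                            output.collect(new Text(words[i] + "," + words[j]), ONE);
                        }
                    }
                }
            }
        }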

      Eco System: Integrating Hadoop into the Enterprise Workflow

      17. Augmenting Enterprise Data Warehouse

      • Introduction
      • RDBMS Strengths
      • RDBMS Weaknesses
      • Typical RDBMS Scenario
      • OLAP Database Limitations
      • Using Hadoop to Augment Existing Databases
      • Benefits of Hadoop
      • Hadoop Tradeoffs

      18. Introduction, usage and Basic Syntax of Sqoop

      • Importing Data from an RDBMS to HDFS
      • Sqoop: SQL to Hadoop
      • Custom Sqoop Connectors
      • Sqoop: Basic Syntax
      • Connecting to a Database Server
      • Selecting the Data to Import
      • Free-form Query Imports
      • Examples of Sqoop
      • Sqoop: Other Options
      • Demonstration: Importing Data With Sqoop

      Eco System: Hadoop Eco System Projects

      19. HIVE

      • Hive & Pig: Motivation
      • Hive: Introduction
      • Hive: Features
      • The Hive Data Model
      • Hive Data Types
      • Timestamps data type
      • The Hive Metastore
      • Hive Data: Physical Layout
      • Hive Basics: Creating Table
      • Loading Data into Hive
      • Using Sqoop to import data into HIVE tables
      • Basic Select Queries
      • Joining Tables
      • Storing Output Results
      • Creating User-Defined Functions
      • Hive Limitations
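
      A minimal sketch tying together "Creating Table", "Loading Data into Hive" and "Basic Select Queries" above, driven from Java through the HiveServer2 JDBC driver; the connection URL, user, table name and HDFS path are placeholders.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class HiveJdbcSketch {
            public static void main(String[] args) throws Exception {
                Class.forName("org.apache.hive.jdbc.HiveDriver");   // HiveServer2 JDBC driver
                try (Connection con = DriverManager.getConnection(
                         "jdbc:hive2://hiveserver:10000/default", "train", "");  // placeholder host/user
                     Statement stmt = con.createStatement()) {

                    // Create a simple delimited table
                    stmt.execute("CREATE TABLE IF NOT EXISTS orders "
                               + "(order_id INT, customer STRING, amount DOUBLE) "
                               + "ROW FORMAT DELIMITED FIELDS TERMINATED BY ','");

                    // Load data from a file already sitting in HDFS (placeholder path)
                    stmt.execute("LOAD DATA INPATH '/user/train/orders.csv' INTO TABLE orders");

                    // Basic select with aggregation
                    try (ResultSet rs = stmt.executeQuery(
                             "SELECT customer, SUM(amount) FROM orders GROUP BY customer")) {
                        while (rs.next()) {
                            System.out.println(rs.getString(1) + "\t" + rs.getDouble(2));
                        }
                    }
                }
            }
        }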

      20. PIG

      • Pig: Introduction
      • Pig Latin
      • Pig Concepts
      • Pig Features
      • A Sample Pig Script
      • More Pig Latin
      • More Pig Latin: Grouping
      • More Pig Latin: FOREACH
      • Pig vs. SQL
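
      A minimal embedded-Pig sketch of "A Sample Pig Script" and the grouping/FOREACH topics above, driven from Java through PigServer (assuming local execution mode); the input path and field layout are placeholders.

        import java.util.Iterator;
        import org.apache.pig.ExecType;
        import org.apache.pig.PigServer;
        import org.apache.pig.data.Tuple;

        public class PigWordCountSketch {
            public static void main(String[] args) throws Exception {
                PigServer pig = new PigServer(ExecType.LOCAL);   // or ExecType.MAPREDUCE on a cluster

                // Load lines, split into words, group and count -- classic Pig Latin
                pig.registerQuery("lines = LOAD '/user/train/input.txt' AS (line:chararray);");
                pig.registerQuery("words = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;");
                pig.registerQuery("grouped = GROUP words BY word;");
                pig.registerQuery("counts = FOREACH grouped GENERATE group AS word, COUNT(words) AS n;");

                // Iterate over the result instead of DUMPing it
                Iterator<Tuple> it = pig.openIterator("counts");
                while (it.hasNext()) {
                    System.out.println(it.next());
                }
                pig.shutdown();
            }
        }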

      21. Zookeeper

      • Configuring Zookeeper
      • About Zookeeper
      • Components in Zookeeper

      22. Flume

      • Flume: Basics
      • Flume's High-Level Architecture
      • Flow in Flume
      • Flume: Features
      • Flume Agent Characteristics
      • Flume's Design Goals: Reliability
      • Flume's Design Goals: Scalability
      • Flume's Design Goals: Manageability
      • Flume's Design Goals: Extensibility
      • Flume: Usage Patterns

      Fundamentals: Introduction to Apache Hadoop and its Ecosystem 

      23. The Motivation for Hadoop

      • Traditional Large Scale Computation
      • Distributed Systems: Problems
      • Distributed Systems: Data Storage
      • The Data Driven World
      • Data Becomes the Bottleneck
      • Partial Failure Support
      • Data Recoverability
      • Component Recovery
      • Consistency
      • Scalability
      • Hadoop’s History
      • Core Hadoop Concepts
      • Hadoop: Very High-Level Overview

      24. Hadoop: Concepts and Architecture

      • Hadoop Components
      • Hadoop Components: HDFS
      • Hadoop Components: MapReduce
      • HDFS Basic Concepts
      • How Files Are Stored
      • How Files Are Stored: Example
      • More on the HDFS NameNode
      • HDFS: Points To Note
      • Accessing HDFS
      • hadoop fs Examples
      • The Training Virtual Machine
      • Demonstration: Uploading Files and new data into HDFS
      • Demonstration: Exploring Hadoop Distributed File System
      • What Is MapReduce?
      • Features of MapReduce
      • Giant Data: MapReduce and Hadoop
      • MapReduce: Automatically Distributed
      • MapReduce Framework
      • MapReduce: Map Phase
      • MapReduce Programming Example: Search Engine
      • Schematic Process of a MapReduce Computation
      • The use of a combiner
      • MapReduce: The Big Picture
      • The Five Hadoop Daemons
      • Basic Cluster Configuration
      • Submitting a Job
      • MapReduce: The JobTracker
      • MapReduce: Terminology
      • MapReduce: Terminology – Speculative Execution
      • MapReduce: The Mapper
      • MapReduce: The Reducer
      • Example Reducer: Sum Reducer
      • Example Reducer: Identity Reducer
      • MapReduce Example: Word Count
      • MapReduce: Data Locality
      • MapReduce: Is Shuffle and Sort a Bottleneck?
      • MapReduce: Is a Slow Mapper a Bottleneck?
      • Demonstration: Running a MapReduce Job

      25. Hadoop and the Data Warehouse

      • Hadoop and the Data Warehouse
      • Hadoop Differentiators
      • Data Warehouse Differentiators
      • When and Where to Use Which

      26. Introducing Hadoop Ecosystem Components

      • Other Ecosystem Projects: Introduction
      • Hive
      • Pig
      • Flume
      • Sqoop
      • Zookeeper
      • HBase

      Advanced: Basic Programming with the Hadoop Core API

      27. Writing a MapReduce Program

      • A Sample MapReduce Program: Introduction
      • MapReduce: List Processing
      • MapReduce Data Flow
      • The MapReduce Flow: Introduction
      • Basic MapReduce API Concepts
      • Putting Mapper & Reducer together in MapReduce
      • Our MapReduce Program: WordCount
      • Getting Data to the Mapper
      • Keys and Values are Objects
      • What Is WritableComparable?
      • Writing MapReduce application in Java
      • The Driver
      • The Driver: Complete Code
      • The Driver: Import Statements
      • The Driver: Main Code
      • The Driver Class: Main Method
      • Sanity Checking The Job’s Invocation
      • Configuring The Job With JobConf
      • Creating a New JobConf Object
      • Naming The Job
      • Specifying Input and Output Directories
      • Specifying the InputFormat
      • Determining Which Files To Read
      • Specifying Final Output With Output Format
      • Specify The Classes for Mapper and Reducer
      • Specify The Intermediate Data Types
      • Specify The Final Output Data Types
      • Running the Job
      • Reprise: Driver Code
      • The Mapper
      • The Mapper: Complete Code
      • The Mapper: import Statements
      • The Mapper: Main Code
      • The Map Method
      • The map Method: Processing The Line
      • Reprise: The Map Method
      • The Reducer
      • The Reducer: Complete Code
      • The Reducer: Import Statements
      • The Reducer: Main Code
      • The reduce Method
      • Processing The Values
      • Writing The Final Output
      • Reprise: The Reduce Method
      • Speeding up Hadoop development by using Eclipse
      • Integrated Development Environments
      • Using Eclipse
      • Demonstration: Writing a MapReduce program
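
      Pulling together the Driver, Mapper and Reducer topics above, here is a compact WordCount written against the classic (org.apache.hadoop.mapred) API that this module's JobConf-based walkthrough refers to; class names and paths are illustrative, not prescribed by the course.

        import java.io.IOException;
        import java.util.Iterator;
        import java.util.StringTokenizer;
        import org.apache.hadoop.fs.Path;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.FileInputFormat;
        import org.apache.hadoop.mapred.FileOutputFormat;
        import org.apache.hadoop.mapred.JobClient;
        import org.apache.hadoop.mapred.JobConf;
        import org.apache.hadoop.mapred.MapReduceBase;
        import org.apache.hadoop.mapred.Mapper;
        import org.apache.hadoop.mapred.OutputCollector;
        import org.apache.hadoop.mapred.Reducer;
        import org.apache.hadoop.mapred.Reporter;

        public class WordCount {

            // Mapper: one (word, 1) pair per token in the input line
            public static class WordMapper extends MapReduceBase
                    implements Mapper<LongWritable, Text, Text, IntWritable> {
                private static final IntWritable ONE = new IntWritable(1);
                private final Text word = new Text();

                public void map(LongWritable key, Text value,
                                OutputCollector<Text, IntWritable> output, Reporter reporter)
                        throws IOException {
                    StringTokenizer tokens = new StringTokenizer(value.toString());
                    while (tokens.hasMoreTokens()) {
                        word.set(tokens.nextToken().toLowerCase());
                        output.collect(word, ONE);
                    }
                }
            }

            // Reducer: sum the counts emitted for each word
            public static class SumReducer extends MapReduceBase
                    implements Reducer<Text, IntWritable, Text, IntWritable> {
                public void reduce(Text key, Iterator<IntWritable> values,
                                   OutputCollector<Text, IntWritable> output, Reporter reporter)
                        throws IOException {
                    int sum = 0;
                    while (values.hasNext()) {
                        sum += values.next().get();
                    }
                    output.collect(key, new IntWritable(sum));
                }
            }

            // Driver: configure the job with JobConf and submit it
            public static void main(String[] args) throws IOException {
                JobConf conf = new JobConf(WordCount.class);
                conf.setJobName("wordcount");
                conf.setOutputKeyClass(Text.class);
                conf.setOutputValueClass(IntWritable.class);
                conf.setMapperClass(WordMapper.class);
                conf.setReducerClass(SumReducer.class);
                FileInputFormat.setInputPaths(conf, new Path(args[0]));
                FileOutputFormat.setOutputPath(conf, new Path(args[1]));
                JobClient.runJob(conf);
            }
        }

      A jar built from this class would typically be submitted with something like: hadoop jar wordcount.jar WordCount <input dir> <output dir>.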

      28. Introduction to the Combiner

      • The Combiner
      • MapReduce Example: Word Count
      • Word Count with Combiner
      • Specifying a Combiner
      • Demonstration: Writing and Implementing a Combiner
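
      In the WordCount driver sketched under the previous module, specifying a combiner is a single extra line; because addition is commutative and associative, the reducer class itself can double as the combiner, cutting the volume of intermediate data shuffled across the network. (SumReducer is the reducer class from that earlier sketch.)

        // In the driver, alongside setMapperClass()/setReducerClass() (classic mapred API):
        conf.setCombinerClass(SumReducer.class);   // per-map-task local aggregation before the shuffle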

      Advanced: Problem Solving with MapReduce

      29. Sorting & Searching Large Data Sets

      • Introduction
      • Sorting
      • Sorting as a Speed Test of Hadoop
      • Shuffle and Sort in MapReduce
      • Searching

      30. Performing a Secondary Sort

      • Secondary Sort: Motivation
      • Implementing the Secondary Sort
      • Secondary Sort: Example
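
      A sketch of the moving parts a secondary sort needs in the classic API. The class names used here (SecondarySortJob, CompositeKey, FirstFieldPartitioner, CompositeKeyComparator, NaturalKeyGroupingComparator) are hypothetical placeholders for implementations you would write: the key carries both the natural key and the field to sort by, the partitioner and grouping comparator look only at the natural key, and the sort comparator orders by both.

        // Driver settings for a secondary sort (classic mapred API); the classes
        // named below are hypothetical placeholders, not part of Hadoop itself.
        JobConf conf = new JobConf(SecondarySortJob.class);
        conf.setMapOutputKeyClass(CompositeKey.class);          // e.g. (customerId, orderDate)
        conf.setMapOutputValueClass(Text.class);

        // 1. Partition on the natural key only, so all records for a customer
        //    reach the same reducer.
        conf.setPartitionerClass(FirstFieldPartitioner.class);

        // 2. Sort the composite key by natural key, then by the secondary field,
        //    so each reducer sees a customer's records in date order.
        conf.setOutputKeyComparatorClass(CompositeKeyComparator.class);

        // 3. Group by the natural key only, so one reduce() call receives all
        //    records for a customer despite their differing composite keys.
        conf.setOutputValueGroupingComparator(NaturalKeyGroupingComparator.class);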

      31. Indexing Data and Inverted Index

      • Indexing
      • Inverted Index Algorithm
      • Inverted Index: DataFlow
      • Aside: Word Count
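
      A minimal sketch of the inverted index algorithm above: the mapper emits (word, source document) pairs, taking the document name from the input split, and the reducer collapses the document names for each word into one posting list. The classic mapred API is used to match the WordCount sketch earlier; the driver would be configured like that one, with Text as the map output value class.

        import java.io.IOException;
        import java.util.HashSet;
        import java.util.Iterator;
        import java.util.Set;
        import org.apache.hadoop.io.LongWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapred.FileSplit;
        import org.apache.hadoop.mapred.MapReduceBase;
        import org.apache.hadoop.mapred.Mapper;
        import org.apache.hadoop.mapred.OutputCollector;
        import org.apache.hadoop.mapred.Reducer;
        import org.apache.hadoop.mapred.Reporter;

        public class InvertedIndex {

            // Mapper: emit (word, source document) for every word in the line
            public static class IndexMapper extends MapReduceBase
                    implements Mapper<LongWritable, Text, Text, Text> {
                public void map(LongWritable key, Text value,
                                OutputCollector<Text, Text> output, Reporter reporter)
                        throws IOException {
                    String doc = ((FileSplit) reporter.getInputSplit()).getPath().getName();
                    for (String w : value.toString().toLowerCase().split("\\W+")) {
                        if (!w.isEmpty()) {
                            output.collect(new Text(w), new Text(doc));
                        }
                    }
                }
            }

            // Reducer: build the posting list of documents for each word
            public static class IndexReducer extends MapReduceBase
                    implements Reducer<Text, Text, Text, Text> {
                public void reduce(Text key, Iterator<Text> values,
                                   OutputCollector<Text, Text> output, Reporter reporter)
                        throws IOException {
                    Set<String> docs = new HashSet<>();
                    while (values.hasNext()) {
                        docs.add(values.next().toString());
                    }
                    output.collect(key, new Text(String.join(",", docs)));
                }
            }
        }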

      32. Term Frequency-Inverse Document Frequency (TF-IDF)

      • Term Frequency-Inverse Document Frequency (TF-IDF)
      • TF-IDF: Motivation
      • TF-IDF: Data Mining Example
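
      As a quick worked instance of the TF-IDF definition given earlier: suppose a term occurs 3 times in a document and appears in 2 of a 10-document corpus. Using the natural logarithm (the base is an implementation choice):

          \mathrm{tfidf} = \mathrm{tf} \cdot \log \frac{N}{\mathrm{df}} = 3 \cdot \ln \frac{10}{2} \approx 3 \cdot 1.609 \approx 4.83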