610+ Hadoop Training in Bangalore



    Hadoop Training in Bangalore as on Oct 19, 2017

    1. +91 80 48100596
      Hadoop Training, Business intelligence & analytics training

      Our dedicated trainers and consultants have extensive corporate experience in successfully delivering effective business solutions.

    2. Gensoft IT India, Jayanagar

      55 Reviews 7.1 Sulekha Score
      +91 80 48067108
      Hadoop Training, Informatica training
      Hadoop Administration training from Gensoft IT India gives participants expertise in all the steps necessary to operate and maintain a Hadoop cluster, from planning, installation and configuration through load balancing, security and tuning.
    3. Victoria University-India

      1 Review 4.5 Sulekha Score
      +91 79 49072562
      Hadoop Training, Business intelligence & analytics training
      Also Servicing : Bangalore
      We currently offer three master's-by-coursework programs, which together form a professional information technology management suite. You can take units across the different master's courses to support your interests and career aspirations.
    4. +91 80 48032601
      Hadoop Training, Big data & Hadoop training
      LW provides training by in-house experts or certified trainers or corporate developers for Hadoop. Topics include how to install, configure and manage a single and multi-node Hadoop cluster, configure and manage HDFS, write MapReduce jobs and work with many of the projects around Hadoop such as Pig, Hive, HBase, Sqoop, and Zookeeper.
    5. +91 80 43692296
      Hadoop Training, Business intelligence & analytics training
      Willsys' training program has only one objective: to transform you into the most sought-after global professional. More specifically, it aims to equip you with the skill sets employers value.
    6. +91 80 48032479
      Hadoop Training, Business intelligence & analytics training
      Eminent It Info is an IT training institute with highly skilled faculty who deliver lectures combining theory with live scenarios. Learning a technology under the right guidance is all the more important in today's competitive world. Speak with our career counselor for more information.
    7. +91 80 43691745
      Hadoop Training, Business intelligence & analytics training
      We provide 100% job-oriented, practical, real-time training courses in Hadoop, and have an excellent track record of placing 90% of our students in various MNCs and startups.
    8. RJS Recruitment & Consulting, BTM Layout

      26 Reviews 6.7 Sulekha Score
      +91 80 48032368
      Hadoop Training, SAS training
      RJS Technologies came into being in 2009, with an attitude and intellect that deliver results beyond expectations.
    9. Stansys Software Solutions, BTM Layout

      5 Reviews 5.3 Sulekha Score
      +91 80 48067240
      Hadoop Training, SAS training
      Our training program is unique and enables students to be functional and productive in Hadoop. STANSYS was a success for our early batches, and STANSYS will make you a success if you join us.
    10. AMS Infotech L.L.P., Basavanagudi

      12 Reviews 7.0 Sulekha Score
      +91 80 48070503
      Hadoop Training, SAS training
      Thinking beyond possibilities! Having started operations in 2010, AMS INFOTECH LLP has diversified from its core software and web development business into related verticals such as corporate training, mobile application development, integration, expert consultancy, staffing and technical recruiting, among others. The expertise gained in specific areas has helped us gain leadership in other segments, which is reflected in our product and professional service offerings. Our main focus and expertise are in business intelligence, advanced analytics, data mining, information and research applications, customer management systems (CMS) and enterprise resource planning (ERP), to name a few. Apart from this, we offer placement consultancy services and online training on various courses. Contact us for more details. Mobile number: 8197734111. Email: amsinfotechllp@gmail.com

    10 Reviews of Hadoop Training in Bangalore as on Oct 19, 2017

    Average Rating (4)
    1. 30th September, 2017

      Good one, with proper facilities.

    2. 26th September, 2017

      I had planned to join this institute for my Big Data & Hadoop training. They provided good service, the teaching was very nice, and the guidance was up to date. They quoted 6,500 for 60 days of coaching. Overall, the service they rendered was good.

    3. 26th September, 2017

      Joined this institute for Data Science with R training for my organisation. I go to weekend classes, on Saturdays and Sundays, from 9 am to 12 pm. They quoted me 25,000 for the 3-month course. I am happy to have joined this institute.

    4. 22nd September, 2017

      I had planned to take Big Data & Hadoop training, and for that purpose I approached this institute. I got a better response from them than I expected, and they delivered the service I was hoping for. I paid 20,000 rupees for 2.5 months of training. Overall, I got good service from this institute.

    5. 21st September, 2017

      Good training institute to learn Informatica MDM courses. It was 2 months of training, and the fee I paid was 15,000, including a project and hands-on work. The trainer was very supportive and cooperative, and they provided suitable batch timings based on availability. Overall, I had a very good experience.

    6. 20th September, 2017

      I joined this center for a 2-month training. The course fee I paid was 20,000, which was reasonable overall. They provide good faculty to teach us. We had a very good trainer, who is well experienced and explains each and every concept very clearly. I am glad to have joined this institute.

    7. 18th September, 2017

      Doing my Tableau Analytics course at this academy. I paid 10,000 in fees for the coaching, and the course runs 3 months. I got nice coaching and good tutors at this academy, and I am really happy and satisfied with the service.

    8. 16th September, 2017

      I had planned to join this institute for my Big Data & Hadoop training. They provided good service, the teaching was very nice, and the guidance was up to date. They quoted 16,000 for 2 months of coaching. Overall, the service they rendered was good.

    9. 13th September, 2017

      I joined this center for Big Data & Hadoop training. The duration of the training is 2 months, and the course fee I paid was 16,000, which was reasonable overall. They provide good faculty to teach us. I am glad to have joined this consultancy.

    10. 31st August, 2017

      For those interested in starting a career with Hadoop, DVS is the best.


    Recent Enquiries on Data Science & Business Analytics Training

    • Technology: Big Data
    • Mode of training: Online
    • Training for: Organisation
    • Timing preferences: Weekdays (Evening)
    • Technology: Hadoop
    • Mode of training: Classroom
    • Training for: Organisation
    • Timing preferences: Weekends
    2 days ago
    • Technology: Hadoop Spark
    • Mode of training: Online
    • Training for: Individual
    • Timing preferences: Weekdays (Morning)
    3 days ago
    • Technology: Hadoop
    • Mode of training: Classroom
    • Training for: Individual
    • Timing preferences: Weekdays (Morning)
    3 days ago
    • Big Data & Apache Hadoop Developer Training Highlights:

      1. Master the concepts of Hadoop Distributed File System and MapReduce framework
      2. Setup a Hadoop Cluster
      3. Understand Data Loading Techniques using Sqoop and Flume
      4. Program in MapReduce (Both MRv1 and MRv2)
      5. Learn to write Complex MapReduce programs
      6. Program in YARN (MRv2)
      7. Perform Data Analytics using Pig and Hive
      8. Implement HBase, MapReduce Integration, Advanced Usage and Advanced Indexing
      9. Have a good understanding of ZooKeeper service
      10. New features in Hadoop 2.0: YARN, HDFS Federation, NameNode High Availability
      11. Implement best Practices for Hadoop Development and Debugging
      12. Implement a Hadoop Project
      13. Work on a Real Life Project on Big Data Analytics and gain Hands on Project Experience

      1. Introduction: Apache Hadoop

      • Why Hadoop?
      • Core Hadoop Components
      • Fundamental Concepts

      2. Hadoop Installation and Initial Configuration

      • Deployment Types
      • Installing Hadoop
      • Specifying the Hadoop Configuration
      • Performing Initial HDFS Configuration
      • Performing Initial YARN and MapReduce Configuration
      • Hadoop Logging
      3. HDFS
      • HDFS Features
      • Writing and Reading Files
      • NameNode Memory Considerations
      • Overview of HDFS Security
      • Using the NameNode Web UI
      • Using the Hadoop File Shell

      4. Installing and Configuring Hive and Pig

      • Hive
      • Pig

      5. Managing and Scheduling Jobs

      • Managing Running Jobs
      • Scheduling Hadoop Jobs

      6. Getting Data into HDFS

      • Ingesting Data from External Sources with Flume
      • Ingesting Data from Relational Databases with Sqoop
      • Best Practices for Importing Data

      7. YARN and MapReduce

      • What Is MapReduce?
      • Basic MapReduce Concepts
      • YARN Cluster Architecture
      • Using the YARN Web UI
      • MapReduce Version 1

      8. Planning Your Hadoop Cluster

      • Configuring Nodes

      9. Advanced Cluster Configuration

      • Advanced Configuration Parameters
      • Configuring Hadoop Ports
      • Explicitly Including and Excluding Hosts
      • Configuring HDFS High Availability

      10. HA (High Availability mode in Hadoop)

      • What is HA
      • Importance of HA
      • Configuring HA in Hadoop
      • Demonstrating HA

      Fundamental: Introduction to BIG Data 

      11. Introduction to BIG Data

      • Introduction
      • BIG Data: Insight
      • What do we mean by BIG Data?
      • Understanding BIG Data: Summary
      • Few Examples of BIG Data
      • Why is BIG Data such a buzz?

      12. BIG Data Analytics and why it’s a Need Now?

      • What is BIG data Analytics?
      • Why BIG Data Analytics is a ‘need’ now?
      • BIG Data: The Solution
      • Implementing BIG Data Analytics – Different Approaches

      13. Traditional Analytics vs. BIG Data Analytics

      • The Traditional Approach: Business Requirement Drives Solution Design
      • The BIG Data Approach: Information Sources drive Creative Discovery
      • Traditional and BIG Data Approaches
      • BIG Data Complements Traditional Enterprise Data Warehouse
      • Traditional Analytics Platform v/s BIG Data Analytics Platform

      14. Real Time Case Studies

      • BIG Data Analytics – Use Cases
      • BIG Data to predict your Customer’s Behaviors
      • When to Consider a BIG Data Solution?
      • BIG Data Real Time Case Study

      15. Technologies within BIG Data Eco System

      • BIG Data Landscape
      • BIG Data Key Components
      • Hadoop at a Glance
      • TF-IDF Formally Defined
      • Computing TF-IDF

      16. Calculating Word co- occurrences

      • Word Co-Occurrence: Motivation
      • Word Co-Occurrence: Algorithm

      Eco System: Integrating Hadoop into the Enterprise Workflow
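The "pairs" formulation of word co-occurrence named above is a classic MapReduce exercise; as a minimal single-process sketch in Python (window size and tokenization here are illustrative assumptions, not part of the syllabus):

```python
from collections import Counter

def cooccurrences(docs, window=2):
    """Count unordered word pairs that appear within `window` words
    of each other in any document (the MapReduce 'pairs' approach)."""
    counts = Counter()
    for doc in docs:
        words = doc.lower().split()
        for i, w in enumerate(words):
            # Only look ahead, so each pair is counted once per occurrence
            for u in words[i + 1:i + window + 1]:
                counts[tuple(sorted((w, u)))] += 1
    return counts

counts = cooccurrences(["big data needs big clusters"])
# counts[("big", "data")] == 2  (the pair occurs in two windows)
```

On a cluster, the mapper would emit each pair with a count of 1 and the reducer would sum them, exactly as in word count.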

      17. Augmenting Enterprise Data Warehouse

      • Introduction
      • RDBMS Strengths
      • RDBMS Weaknesses
      • Typical RDBMS Scenario
      • OLAP Database Limitations
      • Using Hadoop to Augment Existing Databases
      • Benefits of Hadoop
      • Hadoop Tradeoffs

      18. Introduction, usage and Basic Syntax of Sqoop

      • Importing Data from an RDBMS to HDFS
      • Sqoop: SQL to Hadoop
      • Custom Sqoop Connectors
      • Sqoop : Basic Syntax
      • Connecting to a Database Server
      • Selecting the Data to Import
      • Free-form Query Imports
      • Examples of Sqoop
      • Sqoop: Other Options
      • Demonstration: Importing Data With Sqoop

      Eco System: Hadoop Eco System Projects

      19. HIVE

      • Hive & Pig: Motivation
      • Hive: Introduction
      • Hive: Features
      • The Hive Data Model
      • Hive Data Types
      • Timestamps data type
      • The Hive Metastore
      • Hive Data: Physical Layout
      • Hive Basics: Creating Table
      • Loading Data into Hive
      • Using Sqoop to import data into HIVE tables
      • Basic Select Queries
      • Joining Tables
      • Storing Output Results
      • Creating User-Defined Functions
      • Hive Limitations

      20. PIG

      • Pig: Introduction
      • Pig Latin
      • Pig Concepts
      • Pig Features
      • A Sample Pig Script
      • More PigLatin
      • More PigLatin: Grouping
      • More PigLatin: FOREACH
      • Pig Vs SQL

      21. Zookeeper

      • Configuring Zookeeper
      • About Zookeeper
      • Components in Zookeeper

      22. Flume

      • Flume: Basics
      • Flume's High-Level Architecture
      • Flow in Flume
      • Flume: Features
      • Flume Agent Characteristics
      • Flume’s Design Goals: Reliability
      • Flume’s Design Goals: Scalability
      • Flume’s Design Goals: Manageability
      • Flume’s Design Goals: Extensibility
      • Flume: Usage Patterns

      Fundamentals: Introduction to Apache Hadoop and its Ecosystem 

      22. The Motivation for Hadoop

      • Traditional Large Scale Computation
      • Distributed Systems: Problems
      • Distributed Systems: Data Storage
      • The Data Driven World
      • Data Becomes the Bottleneck
      • Partial Failure Support
      • Data Recoverability
      • Component Recovery
      • Consistency
      • Scalability
      • Hadoop’s History
      • Core Hadoop Concepts
      • Hadoop: Very High-Level Overview

      23. Hadoop: Concepts and Architecture

      • Hadoop Components
      • Hadoop Components: HDFS
      • Hadoop Components: MapReduce
      • HDFS Basic Concepts
      • How Files Are Stored
      • How Files Are Stored: Example
      • More on the HDFS NameNode
      • HDFS: Points To Note
      • Accessing HDFS
      • Hadoop fs Examples
      • The Training Virtual Machine
      • Demonstration: Uploading Files and new data into HDFS
      • Demonstration: Exploring Hadoop Distributed File System
      • What is MapReduce?
      • Features of MapReduce
      • Giant Data: MapReduce and Hadoop
      • MapReduce: Automatically Distributed
      • MapReduce Framework
      • MapReduce: Map Phase
      • MapReduce Programming Example: Search Engine
      • Schematic Process of a MapReduce Computation
      • The use of a combiner
      • MapReduce: The Big Picture
      • The Five Hadoop Daemons
      • Basic Cluster Configuration
      • Submitting a Job
      • MapReduce: The JobTracker
      • MapReduce: Terminology
      • MapReduce: Terminology – Speculative Execution
      • MapReduce: The Mapper
      • MapReduce: The Reducer
      • Example Reducer: Sum Reducer
      • Example Reducer: Identity Reducer
      • MapReduce Example: Word Count
      • MapReduce: Data Locality
      • MapReduce: Is Shuffle and Sort a Bottleneck?
      • MapReduce: Is a Slow Mapper a Bottleneck?
      • Demonstration: Running a MapReduce Job
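The map → shuffle/sort → reduce flow outlined above can be simulated in plain Python, with no cluster required. This is an illustrative sketch of the word-count example, not the course's Java code:

```python
from collections import defaultdict

def map_phase(line):
    # Mapper: emit a (word, 1) pair for every word in the input line
    return [(word, 1) for word in line.lower().split()]

def shuffle(pairs):
    # Shuffle & sort: group all values by key, as the framework does
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reducer: sum the counts for each word
    return {word: sum(ones) for word, ones in groups.items()}

lines = ["hadoop stores data", "hadoop processes data"]
intermediate = [pair for line in lines for pair in map_phase(line)]
result = reduce_phase(shuffle(intermediate))
# result == {"hadoop": 2, "stores": 1, "data": 2, "processes": 1}
```

On a real cluster, the map and reduce calls run in parallel on different nodes and the shuffle moves data across the network; the logic is the same.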

      24. Hadoop and the Data Warehouse

      • Hadoop and the Data Warehouse
      • Hadoop Differentiators
      • Data Warehouse Differentiators
      • When and Where to Use Which

      25. Introducing Hadoop Eco system components

      • Other Ecosystem Projects: Introduction
      • Hive
      • Pig
      • Flume
      • Sqoop
      • Zookeeper
      • HBase

      Advance: Basic Programming with the Hadoop Core API

      26. Writing MapReduce Program

      • A Sample MapReduce Program: Introduction
      • Map Reduce: List Processing
      • MapReduce Data Flow
      • The MapReduce Flow: Introduction
      • Basic MapReduce API Concepts
      • Putting Mapper & Reducer together in MapReduce
      • Our MapReduce Program: WordCount
      • Getting Data to the Mapper
      • Keys and Values are Objects
      • What is WritableComparable?
      • Writing MapReduce application in Java
      • The Driver
      • The Driver: Complete Code
      • The Driver: Import Statements
      • The Driver: Main Code
      • The Driver Class: Main Method
      • Sanity Checking The Job’s Invocation
      • Configuring The Job With JobConf
      • Creating a New JobConf Object
      • Naming The Job
      • Specifying Input and Output Directories
      • Specifying the InputFormat
      • Determining Which Files To Read
      • Specifying Final Output With Output Format
      • Specify The Classes for Mapper and Reducer
      • Specify The Intermediate Data Types
      • Specify The Final Output Data Types
      • Running the Job
      • Reprise: Driver Code
      • The Mapper
      • The Mapper: Complete Code
      • The Mapper: import Statements
      • The Mapper: Main Code
      • The Map Method
      • The map Method: Processing The Line
      • Reprise: The Map Method
      • The Reducer
      • The Reducer: Complete Code
      • The Reducer: Import Statements
      • The Reducer: Main Code
      • The reduce Method
      • Processing The Values
      • Writing The Final Output
      • Reprise: The Reduce Method
      • Speeding up Hadoop development by using Eclipse
      • Integrated Development Environments
      • Using Eclipse
      • Demonstration: Writing a MapReduce program

      27. Introduction to Combiner

      • The Combiner
      • MapReduce Example: Word Count
      • Word Count with Combiner
      • Specifying a Combiner
      • Demonstration: Writing and Implementing a Combiner
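A combiner runs reducer-style aggregation on each mapper's local output before the shuffle, shrinking the data sent over the network. A hedged Python sketch of the idea (the split of input lines across "mappers" is an assumption for illustration):

```python
from collections import Counter

def map_with_combiner(lines):
    """One mapper's output, locally pre-aggregated by a combiner:
    instead of emitting ('data', 1) many times, emit ('data', n) once."""
    combined = Counter()
    for line in lines:
        combined.update(line.lower().split())
    return list(combined.items())

def final_reduce(mapper_outputs):
    # The reducer now sums partial counts instead of raw 1s
    totals = Counter()
    for partial in mapper_outputs:
        for word, count in partial:
            totals[word] += count
    return dict(totals)

mapper1 = map_with_combiner(["data data data"])  # [("data", 3)]
mapper2 = map_with_combiner(["big data"])
result = final_reduce([mapper1, mapper2])
# result == {"data": 4, "big": 1}
```

This works for word count because addition is associative and commutative; a combiner must be safe to apply zero, one, or many times.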

      Advance: Problem Solving with MapReduce 

      28. Sorting & searching large data sets

      • Introduction
      • Sorting
      • Sorting as a Speed Test of Hadoop
      • Shuffle and Sort in MapReduce
      • Searching

      29. Performing a secondary sort

      • Secondary Sort: Motivation
      • Implementing the Secondary Sort
      • Secondary Sort: Example
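Secondary sort in MapReduce uses a composite key so the framework delivers each group's values already ordered. The same effect can be shown locally in Python (the stock/price records are made-up sample data):

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical records: (stock, timestamp, price)
records = [
    ("HDP", 3, 10.5),
    ("HDP", 1, 10.1),
    ("SPK", 2, 20.0),
    ("HDP", 2, 10.3),
]

# Composite key (natural key, secondary key): sort by stock, then by
# timestamp within each stock -- the effect a custom partitioner plus
# grouping comparator achieve on a cluster.
records.sort(key=itemgetter(0, 1))

grouped = {
    stock: [price for _, _, price in rows]
    for stock, rows in groupby(records, key=itemgetter(0))
}
# grouped == {"HDP": [10.1, 10.3, 10.5], "SPK": [20.0]}
```

The point of the technique is that the reducer never has to buffer and re-sort values itself, which matters when a single key has millions of values.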

      30. Indexing data and inverted Index

      • Indexing
      • Inverted Index Algorithm
      • Inverted Index: DataFlow
      • Aside: Word Count
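The inverted index algorithm maps each word to the documents that contain it; a minimal Python sketch (the document IDs are invented for illustration):

```python
from collections import defaultdict

def inverted_index(docs):
    """Build a mapping: word -> sorted list of doc IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    # Sets deduplicate repeated occurrences within one document
    return {word: sorted(ids) for word, ids in index.items()}

index = inverted_index({
    "d1": "hadoop stores big data",
    "d2": "spark processes big data",
})
# index["big"] == ["d1", "d2"]; index["spark"] == ["d2"]
```

In MapReduce terms, the mapper emits (word, doc_id) pairs and the reducer collects the posting list for each word.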

      31. Term Frequency - Inverse Document Frequency (TF- IDF)

      • Term Frequency - Inverse Document Frequency (TF-IDF)
      • TF-IDF: Motivation
      • TF-IDF: Data Mining Example
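TF-IDF weighs a term by its frequency in one document against its rarity across the corpus. A common formulation (one of several variants) computed in Python, with a two-document toy corpus as the assumed input:

```python
import math
from collections import Counter

def tf_idf(docs):
    """tf-idf(t, d) = tf(t, d) * log(N / df(t)), where tf is the raw
    count of t in d, N is the number of documents, and df(t) is the
    number of documents containing t."""
    n = len(docs)
    df = Counter()          # document frequency per term
    tfs = {}                # term frequencies per document
    for doc_id, text in docs.items():
        tf = Counter(text.lower().split())
        tfs[doc_id] = tf
        df.update(tf.keys())
    return {
        doc_id: {t: c * math.log(n / df[t]) for t, c in tf.items()}
        for doc_id, tf in tfs.items()
    }

scores = tf_idf({"d1": "big data big", "d2": "small data"})
# "data" appears in every document, so its idf is log(2/2) = 0
```

Terms appearing in every document score zero, which is exactly the motivation: common words carry no discriminating power.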