Sunday 26 May 2019

Google Cloud Platform(GCP) : Part- 3

Interacting with GCP

There are four ways we can interact with Google Cloud Platform.

  • Console: The GCP Console is a web-based administrative interface. It lets us view and manage all our projects and all the resources they use. It also lets us enable, disable and explore the APIs of GCP services, and it gives us access to Cloud Shell. The GCP Console also includes a tool called the APIs Explorer that helps us learn about the APIs interactively. It lets us see which APIs are available and in what versions, what parameters they expect, and the documentation on them is built in.
  • SDK and Cloud Shell: Cloud Shell is a command-line interface to GCP that's easily accessed from the browser. From Cloud Shell, we can use the tools provided by the Google Cloud Software Development Kit (SDK) without having to first install them somewhere. The Google Cloud SDK is a set of tools that we can use to manage our resources and applications on GCP. These include the gcloud tool, which provides the main command-line interface for Google Cloud Platform products and services. There's also gsutil, which is for Google Cloud Storage, and bq, which is for BigQuery (see the example commands after this list). The easiest way to get to the SDK commands is to click the Cloud Shell button in the GCP Console. We can also install the SDK on our own computers, on our on-premises servers, or on virtual machines in other clouds. The SDK is also available as a Docker image.
  • Mobile App
  • APIs: The services that make up GCP offer RESTful application programming interfaces so that the code we write can control them. The GCP Console lets us turn APIs on and off. Many APIs are off by default, and many are associated with quotas and limits. These restrictions help protect us from using resources inadvertently. We can enable only those APIs we need, and we can request increases in quotas when we need more resources.
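As a minimal sketch of what working with these tools looks like (the bucket name and query below are hypothetical placeholders, and flags may vary by SDK version):
# List the Compute Engine VM instances in the current project
gcloud compute instances list
# List the contents of a Cloud Storage bucket (bucket name is a placeholder)
gsutil ls gs://my-example-bucket
# Run a standard SQL query in BigQuery
bq query --use_legacy_sql=false 'SELECT 1 AS demo'
# Enable an API (here, the Compute Engine API) for the current project
gcloud services enable compute.googleapis.com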

Cloud Launcher

Google Cloud Launcher is a tool for quickly deploying functional software packages on Google Cloud Platform. GCP updates the base images for the software packages to fix critical issues and vulnerabilities, but it doesn't update the software after it has been deployed.

Virtual Private Cloud(VPC)

Virtual machines have the power and generality of a full-fledged operating system. We can segment our networks, use firewall rules to restrict access to instances, and create static routes to forward traffic to specific destinations. Virtual Private Cloud networks that we define have global scope. We can dynamically increase the size of a subnet in a custom network by expanding the range of IP addresses allocated to it, as shown in the example below.
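A minimal sketch of expanding a subnet with the gcloud tool; the subnet name, region and prefix length are hypothetical values:
# Grow the subnet's primary IP range to a /20 without downtime
gcloud compute networks subnets expand-ip-range my-subnet \
    --region=us-central1 \
    --prefix-length=20
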
Features of VPCs are:

  • VPCs have routing tables. These are used to forward traffic from one instance to another within the same network, even across sub-networks and even between GCP zones, without requiring an external IP address.
  • VPCs give us a global distributed firewall. We can define rules to restrict both incoming and outgoing traffic to our instances (see the example after this list).
  • Cloud Load Balancing is a fully distributed, software-defined managed service for all our traffic. With Cloud Load Balancing, a single anycast IP front ends all our backend instances in regions around the world. It provides cross-region load balancing, including automatic multi-region failover, which gently moves traffic in fractions if backends become unhealthy. Cloud Load Balancing reacts quickly to changes in users, traffic, backend health, network conditions, and other related conditions.
  • Cloud DNS is a managed DNS service running on the same infrastructure as Google. It has low latency and high availability, and it's a cost-effective way to make our applications and services available to our users. The DNS information we publish is served from redundant locations around the world. Cloud DNS is also programmable. We can publish and manage millions of DNS zones and records using the GCP Console, the command-line interface or the API. Google also has a global system of edge caches, which we can use to accelerate content delivery in our applications using Google Cloud CDN.
  • Cloud Router lets our other networks and our Google VPC exchange route information over a VPN using the Border Gateway Protocol.
  • Peering means putting a router in the same public data center as a Google point of presence and exchanging traffic. One downside of peering, though, is that it isn't covered by a Google service level agreement. Customers who want the highest uptimes for their interconnection with Google should use Dedicated Interconnect, in which customers get one or more direct, private connections to Google. If these connections have topologies that meet Google's specifications, they can be covered by up to a 99.99 percent SLA.
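As a minimal sketch of the global distributed firewall mentioned above, the command below creates an ingress rule on a custom VPC; the network name, tag and source range are hypothetical:
# Allow inbound SSH to instances tagged "web" on a custom VPC network
gcloud compute firewall-rules create allow-ssh-web \
    --network=my-custom-vpc \
    --direction=INGRESS \
    --action=ALLOW \
    --rules=tcp:22 \
    --source-ranges=203.0.113.0/24 \
    --target-tags=web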

Compute Engine

Compute Engine lets us create and run virtual machines on Google infrastructure. We can create a virtual machine instance by using the GCP Console or the gcloud command-line tool. Once our VMs are running, it's easy to take a durable snapshot of their disks. We can keep these as backups or use them when we need to migrate a VM to another region. A preemptible VM is different from an ordinary Compute Engine VM in only one respect: we've given Compute Engine permission to terminate it if its resources are needed elsewhere. We can save a lot of money with preemptible VMs. Compute Engine has a feature called autoscaling that lets us add and take away VMs from our application based on load metrics.
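A minimal sketch of these operations with gcloud; the instance names, zone and machine type are hypothetical, and flags may differ slightly between SDK versions:
# Create an ordinary VM instance
gcloud compute instances create my-vm --zone=us-central1-a --machine-type=n1-standard-1
# Create a preemptible VM instead
gcloud compute instances create my-batch-vm --zone=us-central1-a --preemptible
# Take a durable snapshot of the VM's boot disk as a backup
gcloud compute disks snapshot my-vm --zone=us-central1-a --snapshot-names=my-vm-backup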

Don't limit your challenges. Challenge your limits..

Sunday 14 April 2019

Hadoop : Part - 5


Security in Hadoop

Apache Hadoop achieves security by using Kerberos.
At a high level, there are three steps that a client must take to access a service when using Kerberos.

  • Authentication – The client authenticates itself to the authentication server. Then, receives a timestamped Ticket-Granting Ticket (TGT).
  • Authorization – The client uses the TGT to request a service ticket from the Ticket Granting Server.
  • Service Request – The client uses the service ticket to authenticate itself to the server.
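For example, on a kerberized cluster the flow typically looks like the commands below (the principal and path are hypothetical; the service ticket is requested transparently by the Hadoop client):
# Step 1: authenticate to the KDC and receive a TGT
kinit alice@EXAMPLE.COM
# Inspect the cached tickets
klist
# Steps 2 and 3 happen under the hood when the client accesses HDFS
hadoop fs -ls /user/alice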

Concurrent writes in HDFS

Multiple clients cannot write into an HDFS file at the same time. Apache Hadoop HDFS follows a single-writer, multiple-reader model. When a client opens a file for writing, the NameNode grants it a lease. Now suppose some other client wants to write into that file. It asks the NameNode for the write operation. The NameNode first checks whether it has already granted the lease for writing into that file to someone else. If another client already holds the lease, the NameNode rejects the write request of the second client.

fsck

fsck is the file system check. HDFS provides the fsck command to check for various inconsistencies. It reports problems such as missing blocks for a file or under-replicated blocks. The NameNode automatically corrects most of the recoverable failures. The filesystem check can run on the whole file system or on a subset of files.
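A minimal sketch of running the filesystem check on the whole namespace and on a subset (the subdirectory path is hypothetical):
# Check the entire file system and show files, blocks and their locations
hdfs fsck / -files -blocks -locations
# Check only a subdirectory
hdfs fsck /user/alice/data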

Datanode failures

The NameNode periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode. When the NameNode notices that it has not received a heartbeat message from a DataNode after a certain amount of time, the DataNode is marked as dead. Since its blocks will then be under-replicated, the system begins replicating the blocks that were stored on the dead DataNode. The NameNode orchestrates the replication of data blocks from one DataNode to another. The replication data transfer happens directly between DataNodes, and the data never passes through the NameNode.

Taskinstances

Task instances are the actual MapReduce tasks that run on each slave node. Each task instance runs in its own JVM process. Multiple task instance processes can run on a single slave node.

Communication to HDFS

Client communication with HDFS happens using the Hadoop HDFS API. Client applications talk to the NameNode whenever they wish to locate a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives. Client applications can then talk directly to a DataNode, once the NameNode has provided the location of the data.

HDFS block and Inputsplit

A block is the physical representation of the data, while an InputSplit is the logical representation of the data present in a block.

Hadoop federation

HDFS Federation enhances the existing HDFS architecture. It uses many independent NameNodes/namespaces to scale the name service horizontally, and it separates the namespace layer from the storage layer. Hence HDFS Federation provides isolation, scalability and a simple design.



Don't Give Up. The beginning is always the hardest but life rewards those who work hard for it.

Saturday 6 April 2019

Hadoop : Part - 4


Speculative Execution

Instead of identifying and fixing slow-running tasks, Hadoop tries to detect when a task runs slower than expected and then launches an equivalent task as a backup. This backup mechanism in Hadoop is called Speculative Execution.

Heartbeat in HDFS

A heartbeat is a signal from a DataNode indicating that it is alive. Each DataNode periodically sends a heartbeat to the NameNode.

Hadoop archives

Hadoop Archives (HAR) offers an effective way to deal with the small files problem.
Hadoop Archives, or HAR, is an archiving facility that packs files into HDFS blocks efficiently, and hence HAR can be used to tackle the small files problem in Hadoop.
hadoop archive -archiveName myhar.har -p /input/location /output/location
Once a .har file is created, you can do a listing on the .har file and you will see that it is made up of index files and part files. Part files are nothing but the original files concatenated together into a big file. Index files are lookup files which are used to look up the individual small files inside the big part files.
hadoop fs -ls /output/location/myhar.har
/output/location/myhar.har/_index
/output/location/myhar.har/_masterindex
/output/location/myhar.har/part-000000

Reason for setting HDFS blocksize as 128MB

The block size is the smallest unit of data that a file system can store. If the block size is small, a large file is split into many blocks, and locating the file requires many lookups on the NameNode. HDFS is meant to handle large files. With a 128 MB block size, a 1 GB file needs only 8 blocks, compared with 256 blocks at a 4 MB block size, so the number of requests goes down, greatly reducing the metadata overhead and load on the NameNode.

Data Locality in Hadoop

Data locality refers to the ability to move the computation close to where the actual data resides on the node, instead of moving large data to computation. This minimizes network congestion and increases the overall throughput of the system.

Safemode in Hadoop

Safemode in Apache Hadoop is a maintenance state of the NameNode during which the NameNode doesn't allow any modifications to the file system. During Safemode, the HDFS cluster is read-only and doesn't replicate or delete blocks.
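A quick sketch of inspecting and controlling Safemode from the command line:
# Check whether the NameNode is in Safemode
hdfs dfsadmin -safemode get
# Enter Safemode manually for maintenance
hdfs dfsadmin -safemode enter
# Leave Safemode
hdfs dfsadmin -safemode leave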

Single Point of Failure

In Hadoop 1.0, the NameNode is a single point of failure (SPOF). If the NameNode fails, all clients are unable to read or write files.
Hadoop 2.0 overcomes this SPOF by providing support for multiple NameNodes. With this feature, if the active NameNode fails, the standby NameNode takes over all the responsibilities of the active node.
Some deployments require a higher degree of fault tolerance, so Hadoop 3.0 extends this feature by allowing the user to run multiple standby NameNodes.



Strive for excellence and success will follow you.. 


Wednesday 27 March 2019

Hadoop : Part - 3


Checkpoint node

The Checkpoint Node keeps track of the latest checkpoint in a directory that has the same structure as the NameNode's directory. It creates checkpoints for the namespace at regular intervals by downloading the edits and fsimage files from the NameNode and merging them locally.

Backup node

The Backup Node maintains an up-to-date, in-memory copy of the file system namespace that is always in sync with the active NameNode.

Overwriting replication factor in HDFS

The replication factor in HDFS can be modified or overwritten in two ways:
  • Using the Hadoop FS shell, the replication factor can be changed on a per-file basis using the command hadoop fs -setrep -w 2 /my/test_file (test_file is the file whose replication factor will be set to 2).
  • Using the Hadoop FS shell, the replication factor of all files under a given directory can be modified using the command hadoop fs -setrep -w 5 /my/test_dir (test_dir is the directory; all files in it will have their replication factor set to 5).

Edge nodes

Edge nodes, or gateway nodes, are the interface between the Hadoop cluster and the external network. Edge nodes are used for running cluster administration tools and client applications.

InputFormats in Hadoop

  • TextInputFormat
  • KeyValueTextInputFormat
  • SequenceFileInputFormat

Rack

A rack is a collection of around 40-50 machines connected to the same network switch. If that switch goes down, all the machines in the rack go out of service, and we say the rack is down.

Rack awareness

The physical location of the DataNodes is referred to as a rack in HDFS. The NameNode acquires the rack ID of each DataNode. The process of selecting closer DataNodes based on the rack information is known as Rack Awareness.

Replica Placement Policy

The contents of a file are divided into data blocks. After consulting the NameNode, the client allocates three DataNodes for each data block. For each data block, two copies exist in one rack and the third copy is placed in another rack. This is generally referred to as the Replica Placement Policy.


In the middle of difficulty lies opportunity.. 

Sunday 24 March 2019

Google Cloud Platform(GCP) : Part- 2

Multi layered security approach

Google also designs custom chips, including a hardware security chip called Titan that's currently being deployed on both servers and peripherals. Google server machines use cryptographic signatures to make sure they are booting the correct software. Google designs and builds its own data centers, which incorporate multiple layers of physical security protections.
Google's infrastructure provides cryptographic privacy and integrity for remote procedure call (RPC) data on the network, which is how Google services communicate with each other. The infrastructure automatically encrypts RPC traffic in transit between data centers. Google's central identity service, which usually manifests to end users as the Google login page, goes beyond asking for a simple username and password. It also intelligently challenges users for additional information based on risk factors, such as whether they have logged in from the same device or a similar location in the past. Users can also use second factors when signing in, including devices based on the Universal Second Factor (U2F) open standard.
Google services that want to make themselves available on the Internet register themselves with an infrastructure service called the Google Front End (GFE), which checks incoming network connections for correct certificates and best practices. The GFE also applies protections against denial-of-service attacks. The scale of its infrastructure enables Google to simply absorb many denial-of-service attacks, even behind the GFEs. Google also has multi-tier, multi-layer denial-of-service protections that further reduce the risk of any denial-of-service impact. Inside Google's infrastructure, machine intelligence and rules warn of possible incidents. Google conducts Red Team exercises, simulated attacks, to improve the effectiveness of its responses.
The principle of least privilege says that each user should have only those privileges needed to do their jobs. In a least-privilege environment, people are protected from an entire class of errors.
GCP customers use IAM (Identity and Access Management) to implement least privilege, and it makes everybody happier. There are four ways to interact with GCP's management layer:

  • Web-based console
  • SDK and command-line tools
  • APIs
  • Mobile app

GCP Resource Hierarchy

All the resources we use, whether they're virtual machines, Cloud Storage buckets, tables in BigQuery or anything else in GCP, are organized into projects. Optionally, these projects may be organized into folders. Folders can contain other folders. All the folders and projects used by our organization can be brought together under an organization node. Projects, folders and organization nodes are all places where policies can be defined.
All Google Cloud Platform resources belong to a project. Projects are the basis for enabling and using GCP services, like managing APIs, enabling billing, adding and removing collaborators and enabling other Google services. Each project is a separate compartment and each resource belongs to exactly one. Projects can have different owners and users; they're billed separately and they're managed separately. Each GCP project has a name and a project ID that we assign. The project ID is a permanent, unchangeable identifier and it has to be unique across GCP. We use project IDs in several contexts to tell GCP which project we want to work with. On the other hand, project names are for our convenience and we can assign them. GCP also assigns each of our projects a unique project number.
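A minimal sketch of creating a project and pointing the SDK at it; the project ID below is a hypothetical placeholder and must be globally unique:
# Create a new project with a chosen project ID and display name
gcloud projects create my-sample-project-24601 --name="My Sample Project"
# Tell gcloud which project to work with from now on
gcloud config set project my-sample-project-24601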
Folders give teams the ability to delegate administrative rights, so they can work independently. The resources in a folder inherit IAM policies from the folder. The organization node is the top of the resource hierarchy, and there are some special roles associated with it.

Identity and Access Management(IAM)

IAM lets administrators authorize who can take action on specific resources. An IAM policy has:
  • A who part
  • A can do what part
  • An on which resource part
The who part names the user or users. The who part of an IAM policy can be defined either by a Google account, a Google group, a service account, or an entire G Suite or Cloud Identity domain. The can do what part is defined by an IAM role. An IAM role is a collection of permissions.
There are three kinds of roles in Cloud IAM. Primitive roles can be applied to a GCP project and they affect all resources in that project. These are the owner, editor and viewer roles. A viewer can examine a given resource but not change its state. An editor can do everything a viewer can do, plus change its state. An owner can do everything an editor can do, plus manage roles and permissions on the resource. The owner role can also set up billing. Often, companies want someone to be able to control the billing for a project without the right to change the resources in the project, and that's why we can grant someone the billing administrator role.
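As a minimal sketch of granting primitive roles with gcloud (the project ID and email addresses are hypothetical placeholders):
# Grant the viewer role to a user on a project
gcloud projects add-iam-policy-binding my-sample-project-24601 \
    --member="user:alice@example.com" --role="roles/viewer"
# Grant the editor role to another user
gcloud projects add-iam-policy-binding my-sample-project-24601 \
    --member="user:bob@example.com" --role="roles/editor"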

IAM Roles

Compute Engine's InstanceAdmin role lets whoever has that role perform a certain set of actions on virtual machines: listing Compute Engine instances, reading and changing their configurations, and starting and stopping them. We must manage permissions for custom roles ourselves, so some companies decide they'd rather stick with the predefined roles. Custom roles can only be used at the project or organization levels; they can't be used at the folder level. Service accounts are named with an email address, but instead of passwords they use cryptographic keys to access resources.
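A minimal sketch of creating a service account and granting it the Compute Engine InstanceAdmin role; the account name and project ID are hypothetical:
# Create a service account
gcloud iam service-accounts create my-worker --display-name="Worker service account"
# Grant it the InstanceAdmin role on the project
gcloud projects add-iam-policy-binding my-sample-project-24601 \
    --member="serviceAccount:my-worker@my-sample-project-24601.iam.gserviceaccount.com" \
    --role="roles/compute.instanceAdmin.v1"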


Be that one you always wanted to be.. 

Saturday 23 March 2019

Hadoop : Part - 2


When to use Hadoop


  • Support for multiple frameworks: Hadoop can be integrated with multiple analytical tools like R and Python for analytics and visualisation, Python and Spark for real-time processing, MongoDB and HBase for NoSQL databases, Pentaho for BI, etc.
  • Data size and Data diversity
  • Lifetime data availability due to scalability and fault tolerance.

Hadoop Namenode failover process

In a High Availability cluster, two separate machines are configured as NameNodes. One of the NameNodes is in an Active state and the other is in a Standby state. The Active NameNode is responsible for all client operations in the cluster, while the Standby is simply acting as a slave.
In order for the Standby node to keep its state synchronized with the Active node, both nodes communicate with a group of separate daemons called “JournalNodes” (JNs). When any namespace modification is performed by the Active node, it durably logs a record of the modification to a majority of these JNs. The Standby node is capable of reading the edits from the JNs, and is constantly watching them for changes to the edit log. As the Standby Node sees the edits, it applies them to its own namespace. 

In the event of a failover, the Standby will ensure that it has read all of the edits from the JournalNodes before promoting itself to the Active state. The Standby node must also have up-to-date information regarding the location of blocks in the cluster. In order to achieve this, the DataNodes are configured with the location of both NameNodes, and send block location information and heartbeats to both.
During a failover, the NameNode which is to become active will simply take over the role of writing to the JournalNodes, which will effectively prevent the other NameNode from continuing in the Active state.
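A quick sketch of checking state and triggering a manual failover with the HA admin tool; nn1 and nn2 are the NameNode service IDs defined in the cluster configuration (hypothetical here):
# Check which NameNode is currently active
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
# Initiate a manual failover from nn1 to nn2
hdfs haadmin -failover nn1 nn2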

Ways to rebalance the cluster when new datanodes are added


  • Select a subset of files that take up a good percentage of your disk space; copy them to new locations in HDFS; remove the old copies of the files; rename the new copies to their original names.
  • Another way, with no interruption of service, is to turn up the replication of the files, wait for the transfers to stabilize, and then turn the replication back down.
  • Turn off the DataNode which is full, wait until its blocks are replicated, and then bring it back again. The over-replicated blocks will be randomly removed from different nodes.
  • Execute the bin/start-balancer.sh command to run a balancing process that moves blocks around the cluster automatically (see the example after this list).
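A minimal sketch of running the balancer after adding new DataNodes; the threshold is the allowed deviation (in percent) of each node's utilization from the cluster average:
# Rebalance until every DataNode is within 5% of the average utilization
hdfs balancer -threshold 5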

Actual data storage locations for NameNode and DataNode

A list of comma-separated pathnames can be specified as dfs.datanode.data.dir for data storage on DataNodes. The dfs.namenode.name.dir parameter specifies the directories where the NameNode stores its data.

Limiting DataNode's disk usage

The dfs.datanode.du.reserved property in $HADOOP_HOME/conf/hdfs-site.xml can be used to limit disk usage by reserving space for non-HDFS use.
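A quick way to verify what these properties are set to on a running cluster (assuming the keys are defined in hdfs-site.xml):
# Print the configured storage directories and reserved space
hdfs getconf -confKey dfs.datanode.data.dir
hdfs getconf -confKey dfs.namenode.name.dir
hdfs getconf -confKey dfs.datanode.du.reserved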

Removing datanodes from a cluster

Removing one or two DataNodes will not lead to any data loss, because the NameNode will re-replicate their blocks as soon as it detects that the nodes are dead.
Hadoop offers a decommission feature to retire a set of existing DataNodes. The nodes to be retired should be included in an exclude file, and the exclude file name should be specified via the configuration parameter dfs.hosts.exclude. Specify entries in full hostname, ip or ip:port format in this file. Then the shell command
bin/hadoop dfsadmin -refreshNodes
should be called, which forces the name-node to re-read the exclude file and start the decommission process.
The decommission progress can be monitored on the name-node Web UI. Until all blocks are replicated the node will be in "Decommission In Progress" state. When decommission is done the state will change to "Decommissioned".

Files and block sizes

HDFS provides an API to specify the block size when creating a file, hence multiple files can have different block sizes:
FileSystem.create(path, overwrite, bufferSize, replication, blockSize, progress)
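The block size can also be overridden per copy from the shell by passing the dfs.blocksize property as a generic option; the file and path below are hypothetical, and 268435456 bytes is 256 MB:
# Upload a file with a 256 MB block size instead of the cluster default
hadoop fs -D dfs.blocksize=268435456 -put localfile.csv /user/alice/data/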

Hadoop streaming

Hadoop has a generic API for writing MapReduce programs in any desired programming language like Python, Ruby, Perl, etc. This is called Hadoop Streaming.
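A minimal sketch of a streaming job with a Python mapper and reducer; the jar path, scripts and HDFS paths are hypothetical and depend on the installation:
hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
    -files mapper.py,reducer.py \
    -input /user/alice/input \
    -output /user/alice/output \
    -mapper "python mapper.py" \
    -reducer "python reducer.py"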

Inter cluster data copy

Hadoop provides the DistCp (distributed copy) command to copy data across different Hadoop clusters.
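A minimal sketch of copying a directory between two clusters; the NameNode hosts, ports and paths are hypothetical:
# Copy /source/data from cluster A to cluster B
hadoop distcp hdfs://namenodeA:8020/source/data hdfs://namenodeB:8020/backup/data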


Be the best thing that ever happened to everyone 

Saturday 16 March 2019

Hadoop : Part - 1


History
Hadoop was created by Doug Cutting and Mike Cafarella in 2005. Doug was working at Yahoo at that time and is now Chief Architect of Cloudera. Hadoop was named after his son's toy elephant.
Hadoop
Apache Hadoop is a framework that provides various tools to store and process Big Data. It helps in analyzing Big Data and making business decisions. The name Hadoop is sometimes expanded as High Availability Distributed Object Oriented Platform, but it is not an official acronym; the name comes from the toy elephant mentioned above.
Latest version
The latest version of Hadoop is 3.1.2 released on Feb 6, 2019
Companies using Hadoop
Cloudera, Amazon Web Services, IBM, Hortonworks, Intel, Microsoft etc
Top vendors offering Hadoop distribution
Cloudera, HortonWorks, Amazon Web Services Elastic MapReduce Hadoop Distribution, Microsoft, MapR, IBM etc
Advantages of Hadoop distributions
  • Technical Support
  • Consistent with patches, fixes and bug detection
  • Extra components for monitoring
  • Easy to install 
Modes of Hadoop
Hadoop can run in three modes:
  • Standalone- The default mode of Hadoop. It uses the local file system for input and output operations. It is much faster than the other modes and is mainly used for debugging purposes.
  • Pseudo distributed(Single Node Cluster)- In this mode, all daemons run on a single node, so the Master and the Slave node are the same machine.
  • Fully distributed(Multiple Node Cluster)- Here separate nodes are allotted as Master and Slave. The data is distributed across several nodes on Hadoop cluster.

Main components of Hadoop
There are two main components namely:
  • Storage unit– HDFS
  • Processing framework– YARN

HDFS
HDFS (Hadoop Distributed File System) is the storage unit of Hadoop. It is responsible for storing different kinds of data in a distributed environment. It follows a master-slave architecture.
Components of HDFS
  • NameNode: NameNode is the master node which is responsible for storing the metadata of all the files and directories, such as block locations, replication factors etc. It has information about the blocks that make up a file and where those blocks are located in the cluster. NameNode uses two files for storing the metadata, namely:
Fsimage- It keeps track of the latest checkpoint of the namespace.
Edit log- It is the log of changes that have been made to the namespace since the last checkpoint.

  • DataNode: DataNodes are the slave nodes, which are responsible for storing data in the HDFS. NameNode manages all the DataNodes.

YARN
YARN (Yet Another Resource Negotiator) is the processing framework in Hadoop, which manages resources and provides an execution environment to the processes.
Components of YARN
  • ResourceManager: It receives the processing requests, and then passes the requests to the corresponding NodeManagers, where the actual processing takes place. It allocates resources to applications based on their needs. It is the central authority that manages resources and schedules applications running on top of YARN.
  • NodeManager: NodeManager is installed on every DataNode and it is responsible for the execution of the task on every DataNode. It runs on slave machines, and is responsible for launching the application’s containers (where applications execute their part), monitoring their resource usage (CPU, memory, disk, network) and reporting these to the ResourceManager.
Hadoop daemons
Hadoop daemons can be broadly divided into three groups, namely (see the example after this list for starting and verifying them):

  • HDFS daemons- NameNode, DataNode, Secondary NameNode
  • YARN daemons- ResourceManager, NodeManager
  • JobHistoryServer
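A quick sketch of starting the daemons on a cluster and verifying them; the script locations assume a standard Hadoop 2.x/3.x layout:
# Start the HDFS daemons (NameNode, DataNodes, Secondary NameNode)
$HADOOP_HOME/sbin/start-dfs.sh
# Start the YARN daemons (ResourceManager, NodeManagers)
$HADOOP_HOME/sbin/start-yarn.sh
# List the running Java daemons on this machine
jps
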
Secondary NameNode
It periodically merges the changes (edit log) with the FsImage (Filesystem Image), present in the NameNode. It stores the modified FsImage into persistent storage, which can be used in case of failure of NameNode.

JobHistoryServer
It maintains information about MapReduce jobs after the Application Master terminates.

He Who has a Why to live for, can bear almost any How