Monday, 9 March 2020

DataOps

The Way to DataOps

DataOps focuses on the end-to-end delivery of data. In the digital era, companies need to harness their data to derive competitive advantage. In addition, companies across all industries need to comply with new data privacy regulations. The need for DataOps can be summarised as follows:
  • More data is available than ever before
  • More users want access to more data in more combinations

DataOps

When development and operations don’t work in concert, it becomes hard to ship and maintain quality software at speed. This led to the need for DevOps. 
DataOps is similar to DevOps, but centered around the strategic use of data rather than shipping software. DataOps is an automated, process-oriented methodology used by Big Data teams to improve the quality and reduce the cycle time of Data Analytics. It applies to the entire data life cycle, from data preparation to reporting.
It includes automating the different stages of the workflow, including BI, Data Science and Analytics. DataOps speeds up the production of applications running on Big Data processing frameworks.

Components 

DataOps includes the following components:

  • Data Engineering
  • Data Integration
  • Data Security
  • Data Quality

DevOps and DataOps

  • DevOps is the collaboration between Developers, Operations and QA Engineers across the entire application delivery pipeline, from design and coding to testing and production support. DataOps, by contrast, is a Data Management method that emphasizes communication, collaboration, integration and automation of processes between Data Engineers, Data Scientists and other data professionals.
  • DevOps' mission is to enable Developers and Managers to handle modern, web-based application development and deployment. DataOps enables data professionals to optimize modern data storage and analytics.
  • DevOps focuses on continuous delivery by leveraging on-demand IT resources and by automating testing and deployment. DataOps aims to bring the same improvements to Data Analytics.

Steps to implement DataOps

The following are the 7 steps to implement DataOps:
  1. Add data and logic tests (a minimal example follows this list)
  2. Use a version control system
  3. Branch and merge the codebase
  4. Use multiple environments
  5. Reuse and containerize
  6. Parameterize processing
  7. Orchestrate data pipelines
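As a minimal sketch of step 1, here is what a data and logic test might look like in Python with pandas; the dataset, column names and rules below are hypothetical, not part of any standard.

import pandas as pd

def check_orders(df: pd.DataFrame) -> None:
    """Run simple data and logic tests on an (assumed) orders dataset."""
    # Data tests: structural and quality checks on the incoming data
    assert {"order_id", "amount", "order_date"} <= set(df.columns), "missing columns"
    assert df["order_id"].is_unique, "duplicate order ids"
    assert df["amount"].notna().all(), "null amounts found"
    # Logic test: a business rule the pipeline must preserve
    assert (df["amount"] >= 0).all(), "negative order amounts"

if __name__ == "__main__":
    sample = pd.DataFrame({
        "order_id": [1, 2, 3],
        "amount": [10.0, 25.5, 3.2],
        "order_date": pd.to_datetime(["2020-03-01", "2020-03-02", "2020-03-02"]),
    })
    check_orders(sample)
    print("all data and logic tests passed")

Tests like these run automatically on every pipeline execution, so bad data is caught before it reaches reports.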

The above details are based on learnings gathered from various Internet sources.



Keep going, because you didn't come this far to come only this far.. 

Sunday, 1 March 2020

DevOps

DevOps is a Software Engineering practice that aims at unifying Software Development and Operation. As the name implies, it is a combination of Development and Operations. The main phases in each of these can be described as follows:
  • Dev: Plan → Create → Verify → Package
  • Ops: Release → Configure → Monitor

DevOps Culture 

DevOps is often described as a culture. It consists of different aspects such as:
  • Engineer Empowerment: Engineers get more responsibility across the typical application life cycle, from development and testing through deployment, monitoring and being on call.
  • Test Driven Development: This is the practice of writing tests before writing code. It helps increase the quality of the service and gives developers the confidence to release code faster and more frequently (a minimal example follows this list).
  • Automation: Automate everything that can be automated, including test automation, infrastructure automation, deployment automation and so on.
  • Monitoring: This is the process of building monitoring alerts and monitoring the applications.
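As a rough Python sketch of the test-driven flow (the slugify function and its expected behaviour are made up purely for illustration), the test is written first and the implementation follows:

import unittest

# Step 1: write the test first. It fails until slugify() behaves as specified.
class TestSlugify(unittest.TestCase):
    def test_replaces_spaces_and_lowercases(self):
        self.assertEqual(slugify("Hello DevOps World"), "hello-devops-world")

# Step 2: write just enough code to make the test pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

if __name__ == "__main__":
    unittest.main()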

Challenges in DevOps

Dev challenges and their DevOps solutions:
  • Waiting time for code deployment: Continuous Integration ensures quick deployment of code, faster testing and speedy feedback.
  • Pressure of work on old code: Since there is no waiting time to deploy the code, the developer focuses on building the current code.
Ops challenges and their DevOps solutions:
  • Difficult to maintain uptime of the production environment: Containerization or virtualization provides a simulated environment in which to run the software containers and offers great reliability for application uptime.
  • Tools to automate infrastructure management are not effective: Configuration management helps to organize and execute configuration plans, consistently provision the system and proactively manage the infrastructure.
  • As the number of servers to be monitored increases, it becomes difficult to diagnose issues: Continuous monitoring and a feedback system established through DevOps assure effective administration.

Periodic Table of DevOps Tools



Popular DevOps Tools

Some of the most popular DevOps tools are:
  • Git: Git is an open source, distributed and the most popular software versioning system. Every developer holds a full local copy of the repository, and code can be pulled from a shared main repository simultaneously by many developers.
  • Maven: Maven is a build automation tool. It automates the software build process and dependency resolution. A Maven project is configured using a project object model, or pom.xml file.
  • Ansible: Ansible is an open source application used for automated software provisioning, configuration management and application deployment. Ansible helps in controlling an automated cluster environment consisting of many machines.
  • Puppet: Puppet is an open source software configuration management and automated provisioning tool. It is an alternative to Ansible and provides better control over client machines. Puppet comes with a GUI, which can make it easier to use than Ansible.
  • Docker: Docker is a containerization technology. A container packages an application together with all of its dependencies, so it can be deployed on any machine without caring about the underlying host details (a small automation sketch follows this list).
  • Jenkins: Jenkins is an open source automation server written in Java. Jenkins is used to create continuous delivery pipelines.
  • Nagios: Nagios is used for continuous monitoring of infrastructure. Nagios helps in monitoring servers, applications and networks. It provides a GUI to check details like memory utilisation, fan speed, routing tables of switches, or the state of an SQL server.
  • Selenium: This is an open source automation testing framework used for automating the testing of web applications. Selenium is not a single tool but a suite of tools with four components: Selenium IDE, RC, WebDriver and Grid. Selenium is used to repeatedly execute test cases for applications without manual intervention and to generate reports.
  • Chef: Chef is a configuration management tool. Chef is used to manage configuration such as creating or removing a user, adding an SSH key to a user present on multiple nodes, or installing or removing a service.
  • Kubernetes: Kubernetes is an open source container orchestration tool originally developed by Google. It is used for continuous deployment and auto scaling of container clusters, and it improves fault tolerance and load balancing within a cluster.
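To make the automation idea concrete, below is a toy Python sketch of the kind of glue script a CI job (for example a Jenkins build step) might run around Git and Docker. The image name, registry and test command are placeholders, not a prescribed pipeline.

import subprocess

def run(cmd):
    """Run a shell command and fail fast if it returns a non-zero exit code."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    image = "myapp:latest"                           # hypothetical image name
    run(["git", "pull"])                             # fetch the latest code
    run(["docker", "build", "-t", image, "."])       # build the container image
    run(["docker", "run", "--rm", image, "pytest"])  # run tests inside the container (assumes pytest is in the image)
    run(["docker", "push", image])                   # publish the image (assumes a configured registry)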

References 

https://xebialabs.com/periodic-table-of-devops-tools/

Can DevOps be incorporated into Data Management?
More details will be discussed in the next post.

Also, thank you to one of my former colleagues for inspiring me to write a post on this hot topic.



She woke up every morning with the option of being anyone she wished; how beautiful it was that she always chose herself... 

Saturday, 11 January 2020

Quantum Computing

Classical computers are composed of registers and memory, whose collective contents are often referred to as state. Instructions for a classical computer act on this state, which consists of long strings of bits, each encoding either a zero or a one.
Classical computing is approaching its physical limits. Moore's law observes that the number of transistors on a microprocessor doubles roughly every 18 months, which means transistor features are fast approaching atomic scale. The next logical step is to create quantum computers, which harness the behaviour of atoms and molecules to perform processing tasks.
The word quantum is derived from Latin, meaning 'how great' or 'how much'. The discovery that particles are discrete packets of energy with wave-like properties led to the branch of Physics called Quantum Mechanics.
Quantum Computing is the use of quantum mechanics to process information.

Qubits

The main advantage of quantum computers is that they aren't limited to two states, since they use qubits (quantum bits) instead of bits.
Qubits represent atoms, ions, electrons or protons, together with their respective control devices, working in concert to act as computer memory. Because a quantum computer can hold many states simultaneously, it has the potential to be vastly more powerful than today's supercomputers on certain problems.
Qubits are very hard to manipulate; any disturbance causes them to fall out of their quantum state. This is called decoherence. The field of quantum error correction examines the different ways to avoid decoherence.

Superposition and Entanglement

Superposition is the feature that frees quantum computers from binary constraints. It is the ability of a quantum system to be in multiple states at the same time.
Entanglement is an extremely strong correlation that exists between quantum particles. It is so strong that two or more quantum particles can be linked perfectly even when separated by great distances.
A classical computer works with ones and zeroes; a quantum computer has the advantage of using ones, zeroes and superpositions of ones and zeroes. This is why a quantum computer can process a vast number of calculations simultaneously.
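These ideas can be illustrated as plain linear algebra with numpy: a Hadamard gate puts a single qubit into superposition, and a CNOT gate then entangles it with a second qubit into a Bell state.

import numpy as np

ket0 = np.array([1, 0], dtype=complex)           # |0>
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)     # Hadamard gate

plus = H @ ket0                                   # superposition (|0> + |1>)/sqrt(2)
print("single-qubit superposition:", plus)        # amplitudes [0.707, 0.707]

# Two-qubit state |00>; apply H to the first qubit, then CNOT -> Bell state
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
state00 = np.kron(ket0, ket0)                     # |00>
bell = CNOT @ np.kron(H, np.eye(2)) @ state00     # (|00> + |11>)/sqrt(2): entangled
print("Bell state amplitudes:", bell)             # [0.707, 0, 0, 0.707]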

Error Correcting Codes

Quantum computers outperform classical computers on certain problems, but efforts to build them have been hampered by the fragility of qubits, which are easily disturbed by heat and electromagnetic radiation. Quantum error correcting codes are used to protect computations against these errors.

Quantum Computer 

A computation device that makes use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data is termed a quantum computer. Such a device is probabilistic rather than deterministic: repeated runs may return different answers to a given problem, and the distribution of answers indicates the computer's confidence in each.

Applications of Quantum Computers 

1. They are great for solving optimization problems.
2. They could, in principle, break widely used encryption algorithms such as RSA.
3. They can accelerate machine learning tasks such as NLP and image recognition.

Quantum Programming Language(QPL) 

A Quantum Programming Language (QPL) is used to write programs for a quantum computer. Quantum Computation Language (QCL) is one of the most advanced implemented QPLs.
The basic built-in quantum data type in QCL is qreg (quantum register), which can be interpreted as an array of qubits (quantum bits).
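Outside QCL, the same idea can be sketched in plain Python: an n-qubit register is described by 2^n complex amplitudes (this is only an illustration of the data structure, not QCL syntax).

import numpy as np

n = 3                                        # a 3-qubit register, analogous to qreg[3]
amplitudes = np.zeros(2 ** n, dtype=complex)
amplitudes[0] = 1.0                          # start in the basis state |000>
print(len(amplitudes), "complex amplitudes describe the register")   # prints 8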

Interesting facts 

Quantum computers require extremely cold temperatures, as sub-atomic particles must be kept as close as possible to a stationary state to be measured. The cores of D-Wave quantum computers operate at about 0.02 kelvin, roughly -273 degrees Celsius or -460 degrees Fahrenheit, only a fraction of a degree above absolute zero.

Google's Quantum Supremacy

Quantum Supremacy is the demonstration that a quantum computer can perform a calculation that is practically impossible for a classical one.
Google's quantum computer, called 'Sycamore', consists of just 54 qubits. It was able to complete a random circuit sampling task; Google says Sycamore found the answer in a few minutes, whereas the same task would take an estimated 10,000 years on the most powerful supercomputer.

Amazon Braket

AWS announced its quantum computing service, Amazon Braket, along with the AWS Center for Quantum Computing and the Amazon Quantum Solutions Lab, at AWS re:Invent on 2 December 2019. Amazon Braket is a fully managed service that helps customers get started with quantum computing by providing a development environment to explore and design quantum algorithms, test them on simulated quantum computers, and run them on a choice of quantum hardware.
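A minimal sketch using the Braket Python SDK, assuming the SDK is installed; it builds a two-qubit Bell circuit and runs it on the bundled local simulator rather than on real hardware.

from braket.circuits import Circuit
from braket.devices import LocalSimulator

# Build a two-qubit Bell circuit: Hadamard on qubit 0, then CNOT 0 -> 1
bell = Circuit().h(0).cnot(0, 1)

# Run it on the free local simulator shipped with the SDK
device = LocalSimulator()
result = device.run(bell, shots=1000).result()
print(result.measurement_counts)   # expect roughly equal counts of '00' and '11'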

References

Information collected from various sources on the Internet.


At times an impulse is required to trigger an action. Special Thanks to Amazon Braket for being an irresistible impulse to make me write this blog on Quantum Computing. 






Make every day our masterpiece, 
Happy 2020!!


Saturday, 27 July 2019

Exploring Cassandra: Part- 1

Cassandra is an open source, column-family NoSQL database that is scalable enough to handle massive volumes of data stored across commodity nodes.

Why Cassandra?

Consider a scenario where we need to store large amounts of log data. Millions of log entries will be written every day, and the system must also run with zero downtime.

Challenges with RDBMS


  • Cannot efficiently handle huge volumes of data
  • Difficult to serve users worldwide with a centralized, single-node model
  • Hard to guarantee a server with zero downtime

Using Cassandra


  • It is highly scalable and hence can handle large amounts of data
  • Most appropriate for write-heavy workloads, such as the logging scenario above (a connection sketch follows this list)
  • Can handle millions of user requests per day
  • Can continue working even when nodes are down
  • Supports wide rows with a very flexible schema, wherein all rows need not have the same number of columns
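A minimal sketch of the log-storage scenario using the DataStax Python driver; the contact point, keyspace and table names are hypothetical.

from datetime import datetime
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])   # contact point; the driver discovers the other peers
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS logs
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3}
""")
session.execute("""
    CREATE TABLE IF NOT EXISTS logs.app_log (
        app_id text, logged_at timestamp, message text,
        PRIMARY KEY (app_id, logged_at)
    )
""")

# Write-heavy workload: append a log entry for an application
session.execute(
    "INSERT INTO logs.app_log (app_id, logged_at, message) VALUES (%s, %s, %s)",
    ("payments", datetime.utcnow(), "request processed"),
)
cluster.shutdown()

The partition key (app_id) determines which node owns the row, and the clustering column (logged_at) orders entries within the partition.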

Cassandra Vs RDBMS



Cassandra Architecture


  • Cassandra follows a peer-to-peer, masterless architecture, so all the nodes in the cluster are considered equal.
  • Data is replicated on multiple nodes to ensure fault tolerance and high availability.
  • The node that receives a client request is called the coordinator. The coordinator forwards the request to the appropriate node responsible for the given row key.
  • Data Center: Collection of related nodes
  • Node: Place where the data is stored
  • Cluster: It contains one or more nodes

Applications of Cassandra


  • Suitable for high velocity data from sensors
  • Useful to store time series data
  • Social media networking sites use Cassandra for analysis and recommendation of products to their customers
  • Preferred by companies providing messaging services for managing massive amounts of data



All good things are difficult to achieve; and bad things are very easy to get.

Sunday, 26 May 2019

Google Cloud Platform(GCP) : Part- 3

Interacting with GCP

There are four ways we can interact with Google Cloud Platform.

  • Console: The GCP Console is a web-based administrative interface. It lets us view and manage all the projects and all the resources they use. It also lets us enable, disable and explore the APIs of GCP services, and it gives us access to Cloud Shell. The GCP Console also includes a tool called the APIs Explorer that helps us learn about the APIs interactively. It lets us see which APIs are available and in what versions, what parameters they expect, and it has documentation built in.
  • SDK and Cloud Shell: Cloud Shell is a command-line interface to GCP that's easily accessed from the browser. From Cloud Shell, we can use the tools provided by the Google Cloud Software Development Kit (SDK) without having to first install them somewhere. The Google Cloud SDK is a set of tools that we can use to manage our resources and applications on GCP. These include the gcloud tool, which provides the main command-line interface for Google Cloud Platform products and services. There's also gsutil, which is for Google Cloud Storage, and bq, which is for BigQuery. The easiest way to get to the SDK commands is to click the Cloud Shell button in the GCP Console. We can also install the SDK on our own computers, on our on-premises servers, virtual machines and other clouds. The SDK is also available as a Docker image.
  • Mobile App
  • APIs: The services that make up GCP offer RESTful application programming interfaces so that the code we write can control them (a small example follows this list). The GCP Console lets us turn APIs on and off. Many APIs are off by default, and many are associated with quotas and limits. These restrictions help protect us from using resources inadvertently. We can enable only those APIs we need, and we can request increases in quotas when we need more resources.
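As a small sketch of the API route, the google-cloud-storage Python client can list buckets and upload an object; the bucket name below is a placeholder and the client must already be authenticated (for example via gcloud auth application-default login).

from google.cloud import storage

client = storage.Client()                 # uses application default credentials

# List the existing buckets in the current project
for bucket in client.list_buckets():
    print(bucket.name)

# Upload a small object to a (hypothetical, pre-existing) bucket
bucket = client.bucket("my-example-bucket")
blob = bucket.blob("notes/hello.txt")
blob.upload_from_string("Hello from the GCP API")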

Cloud Launcher

Google Cloud Launcher is a tool for quickly deploying functional software packages on Google Cloud Platform. GCP updates the base images for the software packages to fix critical issues and vulnerabilities, but it doesn't update the software after it has been deployed.

Virtual Private Cloud(VPC)

Virtual machines have the power and generality of a full-fledged operating system. We can segment our networks, use firewall rules to restrict access to instances, and create static routes to forward traffic to specific destinations. Virtual Private Cloud networks that we define have global scope, and we can dynamically increase the size of a subnet in a custom network by expanding the range of IP addresses allocated to it.
Features of VPCs are:

  • VPCs have routing tables. These are used to forward traffic from one instance to another instance within the same network, even across sub-networks and even between GCP zones, without requiring an external IP address.
  • VPCs give us a global distributed firewall, which we can use to restrict both incoming and outgoing traffic to instances (a small firewall-rule sketch follows this list).
  • Cloud Load Balancing is a fully distributed, software-defined managed service for all our traffic. With Cloud Load Balancing, a single anycast IP fronts all our backend instances in regions around the world. It provides cross-region load balancing, including automatic multi-region failover, which gently moves traffic in fractions if backends become unhealthy. Cloud Load Balancing reacts quickly to changes in users, traffic, backend health, network conditions and other related conditions.
  • Cloud DNS is a managed DNS service running on the same infrastructure as Google. It has low latency and high availability, and it's a cost-effective way to make our applications and services available to our users. The DNS information we publish is served from redundant locations around the world. Cloud DNS is also programmable: we can publish and manage millions of DNS zones and records using the GCP Console, the command-line interface or the API. Google also has a global system of edge caches, which we can use to accelerate content delivery in our applications using Google Cloud CDN.
  • Cloud Router lets our other networks and our Google VPC exchange route information over a VPN using the Border Gateway Protocol.
  • Peering means putting a router in the same public data center as a Google point of presence and exchanging traffic. One downside of peering, though, is that it isn't covered by a Google service level agreement. Customers who want the highest uptimes for their interconnection with Google should use Dedicated Interconnect, in which customers get one or more direct, private connections to Google. If these connections have topologies that meet Google's specifications, they can be covered by up to a 99.99 percent SLA.
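A sketch of scripting a custom VPC with a subnet and a firewall rule through the gcloud CLI from Python; the network, subnet and rule names are placeholders.

import subprocess

def gcloud(*args):
    cmd = ["gcloud", "compute"] + list(args)
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Custom-mode VPC network with one subnet, plus a firewall rule allowing SSH
gcloud("networks", "create", "demo-vpc", "--subnet-mode=custom")
gcloud("networks", "subnets", "create", "demo-subnet",
       "--network=demo-vpc", "--region=us-central1", "--range=10.0.0.0/24")
gcloud("firewall-rules", "create", "demo-allow-ssh",
       "--network=demo-vpc", "--allow=tcp:22")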

Compute Engine

Compute Engine lets us create and run virtual machines on Google infrastructure. We can create a virtual machine instance by using the Google Cloud Platform Console or the gcloud command-line tool. Once our VMs are running, it's easy to take a durable snapshot of their disks. We can keep these as backups or use them when we need to migrate a VM to another region. A preemptible VM is different from an ordinary Compute Engine VM in only one respect: we've given Compute Engine permission to terminate it if its resources are needed elsewhere. We can save a lot of money with preemptible VMs. Compute Engine also has a feature called autoscaling that lets us add and remove VMs from our application based on load metrics.
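A short sketch of creating a preemptible VM and snapshotting its boot disk with gcloud, wrapped in Python; the instance name, zone and machine type are placeholders.

import subprocess

# Create a preemptible VM (name, zone and machine type are illustrative)
subprocess.run(["gcloud", "compute", "instances", "create", "demo-vm",
                "--zone=us-central1-a", "--machine-type=n1-standard-1",
                "--preemptible"], check=True)

# Snapshot its boot disk (by default named after the instance) as a durable backup
subprocess.run(["gcloud", "compute", "disks", "snapshot", "demo-vm",
                "--zone=us-central1-a", "--snapshot-names=demo-vm-backup"],
               check=True)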

Don't limit your challenges. Challenge your limits..

Sunday, 14 April 2019

Hadoop : Part - 5


Security in Hadoop

Apache Hadoop achieves security by using Kerberos.
At a high level, there are three steps that a client must take to access a service when using Kerberos.

  • Authentication – The client authenticates itself to the authentication server and receives a timestamped Ticket-Granting Ticket (TGT).
  • Authorization – The client uses the TGT to request a service ticket from the Ticket Granting Server.
  • Service Request – The client uses the service ticket to authenticate itself to the server (a brief command-line sketch follows this list).
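A brief sketch of this flow from the command line, wrapped in Python: kinit obtains the TGT, after which the Hadoop client requests service tickets transparently; the principal is a placeholder.

import subprocess

# Authentication: obtain a Ticket-Granting Ticket for a (hypothetical) principal
subprocess.run(["kinit", "alice@EXAMPLE.COM"], check=True)

# Authorization and the service request happen behind the scenes: the Hadoop
# client uses the TGT to obtain a service ticket and authenticate to the NameNode
subprocess.run(["hadoop", "fs", "-ls", "/"], check=True)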

Concurrent writes in HDFS

Multiple clients cannot write into an HDFS file at the same time. Apache Hadoop HDFS follows a single-writer, multiple-reader model. When a client opens a file for writing, the NameNode grants it a lease. If some other client then wants to write into that file, it asks the NameNode for the write operation, and the NameNode first checks whether it has already granted the lease for that file to someone else. If another client already holds the lease, the NameNode rejects the new write request.

fsck

fsck is the File System Check. HDFS uses the fsck (filesystem check) command to check for various inconsistencies. It reports problems like missing blocks for a file or under-replicated blocks. The NameNode automatically corrects most of the recoverable failures. The filesystem check can run on the whole file system or on a subset of files.
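A quick sketch of running the check from Python with the standard fsck options:

import subprocess

# Check the whole filesystem and report files, blocks and their locations
report = subprocess.run(["hdfs", "fsck", "/", "-files", "-blocks", "-locations"],
                        capture_output=True, text=True, check=True)
print(report.stdout)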

Datanode failures

The NameNode periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly, while a Blockreport contains a list of all blocks on a DataNode. When the NameNode notices that it has not received a heartbeat message from a DataNode after a certain amount of time, the DataNode is marked as dead. Since its blocks will then be under-replicated, the system begins replicating the blocks that were stored on the dead DataNode. The NameNode orchestrates this replication of data blocks from one DataNode to another; the replication data transfer happens directly between DataNodes, and the data never passes through the NameNode.

Taskinstances

Task instances are the actual MapReduce tasks that run on each slave node. Each task instance runs in its own JVM process, and there can be multiple task instance processes running on a slave node.

Communication to HDFS

Client communication with HDFS happens using the Hadoop HDFS API. Client applications talk to the NameNode whenever they wish to locate a file. The NameNode responds to successful requests by returning a list of relevant DataNode servers where the data lives. Client applications can then talk directly to a DataNode, once the NameNode has provided the location of the data.
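One way to exercise this from Python is the WebHDFS-based hdfs package; a sketch follows, with the NameNode address, user and path as placeholders.

from hdfs import InsecureClient

# WebHDFS endpoint of the NameNode (HTTP port 9870 by default in Hadoop 3.x)
client = InsecureClient("http://namenode-host:9870", user="alice")

print(client.list("/"))                        # ask the NameNode for a directory listing
with client.read("/data/sample.txt") as reader:
    print(reader.read()[:100])                 # the bytes themselves are streamed from DataNodes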

HDFS Block and InputSplit

A block is the physical representation of data, while an InputSplit is the logical representation of the data present in the block.

Hadoop federation

HDFS Federation enhances the existing HDFS architecture. It uses many independent NameNodes/namespaces to scale the name service horizontally, separating the namespace layer from the storage layer. Hence HDFS Federation provides isolation, scalability and a simpler design.



Don't Give Up. The beginning is always the hardest but life rewards those who work hard for it.

Saturday, 6 April 2019

Hadoop : Part - 4


Speculative Execution

Instead of identifying and fixing slow-running tasks, Hadoop tries to detect when a task runs slower than expected and then launches an equivalent task as a backup. This backup mechanism in Hadoop is called Speculative Execution.

Heartbeat in HDFS

A heartbeat is a signal from a DataNode indicating that it is alive. Each DataNode periodically sends a heartbeat to the NameNode.

Hadoop archives

Hadoop Archives (HAR) offer an effective way to deal with the small files problem: HAR is an archiving facility that packs files into HDFS blocks efficiently, and hence it can be used to tackle large numbers of small files in Hadoop.
hadoop archive -archiveName myhar.har /input/location /output/location
Once a .har file is created, you can do a listing on the .har file, and you will see that it is made up of index files and part files. Part files are nothing but the original files concatenated together into a big file. Index files are lookup files used to locate the individual small files inside the big part files.
hadoop fs -ls /output/location/myhar.har
/output/location/myhar.har/_index
/output/location/myhar.har/_masterindex
/output/location/myhar.har/part-000000

Reason for setting HDFS blocksize as 128MB

The block size is the smallest unit of data that a file system can store. If the block size is small, locating a file requires many lookups on the NameNode. HDFS is meant to handle large files: with a 128 MB block size, the number of blocks (and hence requests and metadata entries) goes down greatly, reducing the overhead and the load on the NameNode.
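The arithmetic is easy to sketch:

GB = 1024 ** 3
file_size = 1024 * GB                 # a 1 TB file

for block_size_mb in (4, 64, 128):
    blocks = file_size // (block_size_mb * 1024 ** 2)
    print(f"{block_size_mb:>3} MB blocks -> {blocks:,} block entries for the NameNode to track")
# 4 MB blocks -> 262,144 entries; 64 MB -> 16,384; 128 MB -> 8,192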

Data Locality in Hadoop

Data locality refers to the ability to move the computation close to where the actual data resides on the node, instead of moving large data to computation. This minimizes network congestion and increases the overall throughput of the system.

Safemode in Hadoop

Safemode in Apache Hadoop is a maintenance state of the NameNode during which the NameNode doesn't allow any modifications to the file system. While in Safemode, the HDFS cluster is read-only and doesn't replicate or delete blocks.
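Safemode can be inspected or exited with the standard hdfs dfsadmin sub-commands; a small sketch:

import subprocess

# Query whether the NameNode is currently in Safemode
subprocess.run(["hdfs", "dfsadmin", "-safemode", "get"], check=True)

# An administrator can explicitly leave Safemode once the cluster is healthy
subprocess.run(["hdfs", "dfsadmin", "-safemode", "leave"], check=True)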

Single Point of Failure

In Hadoop 1.0, the NameNode is a single point of failure (SPOF): if the NameNode fails, all clients are unable to read or write files.
Hadoop 2.0 overcomes this SPOF by providing support for a standby NameNode. If the active NameNode fails, the standby NameNode takes over all the responsibilities of the active node.
Some deployments require a higher degree of fault tolerance, so Hadoop 3.0 extends this feature by allowing the user to run multiple standby NameNodes.



Strive for excellence and success will follow you..