Saturday, 20 March 2021

Setup of Spark Scala Program for WordCount in Windows 10

Install Spark:

Download the latest version of Spark from: https://www.apache.org/dyn/closer.lua/spark/spark-3.1.1/spark-3.1.1-bin-hadoop3.2.tgz

Extract the archive to a directory (e.g., C:\Program Files\spark\)

Install Scala:

Spark 3.1.1 is compatible with Scala 2.12.10

Download the Windows binaries from https://www.scala-lang.org/download/2.12.10.html

Set up the Environment Variables:

SPARK_HOME: c:\progra~1\spark\spark-3.1.1-bin-hadoop3.2

SCALA_HOME: c:\progra~1\scala (point this at the directory where Scala is installed, not at the Spark directory)

Path: %Path%;%SCALA_HOME%\bin;%SPARK_HOME%\bin;

Download the sample code from: https://github.com/anjuprasannan/WordCountExampleSpark

Configuring Scala project in IntelliJ: https://docs.scala-lang.org/getting-started/intellij-track/getting-started-with-scala-in-intellij.html

Add Maven support by following the steps at: https://www.jetbrains.com/help/idea/convert-a-regular-project-into-a-maven-project.html

Modify the pom.xml file as per the Git repository.

Build the project as: mvn clean install

Edit the input and output directories in https://github.com/anjuprasannan/WordCountExampleSpark/blob/main/src/main/scala/WordCount.scala [Note that the output location should be a non-existent directory.]
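For reference, a minimal Spark word count in Scala looks roughly like the sketch below. This is a generic sketch rather than the exact contents of the repository, and the input/output paths are placeholders you would replace with your own:

import org.apache.spark.sql.SparkSession

object WordCount {
  def main(args: Array[String]): Unit = {
    // Local mode so the job runs inside the IDE without a cluster
    val spark = SparkSession.builder()
      .appName("WordCount")
      .master("local[*]")
      .getOrCreate()

    val inputPath = "C:/data/input.txt"   // placeholder input file
    val outputPath = "C:/data/wordcount"  // placeholder; must not exist yet

    spark.sparkContext.textFile(inputPath)
      .flatMap(_.split("\\s+"))   // split each line into words
      .map(word => (word, 1))     // pair each word with a count of 1
      .reduceByKey(_ + _)         // sum the counts per word
      .saveAsTextFile(outputPath)

    spark.stop()
  }
}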

Execute the application: right-click WordCount -> Run 'WordCount'

You can see the output directory created with the result.



"A life spent making mistakes is not only more honorable, but more useful than a life spent doing nothing."


Wednesday, 17 March 2021

Setup of Map Reduce Program for WordCount in Windows 10

Java Download: https://www.oracle.com/in/java/technologies/javase/javase-jdk8-downloads.html

Maven Download: https://maven.apache.org/download.cgi

Maven installation: https://maven.apache.org/install.html

Eclipse download and installation: https://www.eclipse.org/downloads/download.php?file=/technology/epp/downloads/release/2020-12/R/eclipse-java-2020-12-R-win32-x86_64.zip

Hadoop Download: https://hadoop.apache.org/release/3.2.1.html

winutils: https://github.com/cdarlint/winutils/blob/master/hadoop-3.2.1/bin/winutils.exe. Download it and copy it to the bin folder under the Hadoop installation directory.

System Variables setup:

JAVA_HOME: C:\Program Files\Java\jdk1.8.0_281

HADOOP_HOME: C:\Program Files\hadoop\hadoop-3.2.1

Path: %PATH%;%JAVA_HOME%\bin;C:\Program Files\apache-maven-3.6.3\bin;%HADOOP_HOME%\bin;%HADOOP_HOME%\sbin;

Map Reduce Code Setup:

Map Reduce Code: https://github.com/anjuprasannan/MapReduceExample

Check out the code from Git:

git clone https://github.com/anjuprasannan/MapReduceExample.git

Import the project to Eclipse

Edit the input, output and Hadoop home directory locations in "/MapReduceExample/src/com/anjus/mapreduceexample/WordCount.java" [Note that the output location should be a non-existent directory.]
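For orientation, the WordCount logic follows the classic Hadoop mapper/reducer pattern. Below is a rough sketch of that logic, written in Scala against the Hadoop Java API to match the Spark post above; the class names are illustrative, and the actual repository code is plain Java:

import org.apache.hadoop.io.{IntWritable, LongWritable, Text}
import org.apache.hadoop.mapreduce.{Mapper, Reducer}
import scala.collection.JavaConverters._

// Mapper: emit (word, 1) for every token in an input line
class TokenMapper extends Mapper[LongWritable, Text, Text, IntWritable] {
  private val one  = new IntWritable(1)
  private val word = new Text()
  override def map(key: LongWritable, value: Text,
                   context: Mapper[LongWritable, Text, Text, IntWritable]#Context): Unit =
    value.toString.split("\\s+").filter(_.nonEmpty).foreach { token =>
      word.set(token)
      context.write(word, one)
    }
}

// Reducer: sum all counts emitted for each word
class SumReducer extends Reducer[Text, IntWritable, Text, IntWritable] {
  override def reduce(key: Text, values: java.lang.Iterable[IntWritable],
                      context: Reducer[Text, IntWritable, Text, IntWritable]#Context): Unit =
    context.write(key, new IntWritable(values.asScala.map(_.get).sum))
}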

Build the project:

Right click project -> Maven Clean

Right click project -> Maven Install

Job Execution:

Right Click on "/MapReduceExample/src/com/anjus/mapreduceexample/WordCount.java" -> Run As -> Java Application



"Don't be afraid to give up the good to go for the great."





Monday, 18 January 2021

Particle Swarm Optimization

Artificial intelligence (AI) is the intelligence exhibited by machines. It is defined as “the study and design of intelligent agents”, where an intelligent agent represents a system that perceives its environment and takes actions that maximize its chance of success. AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other. Currently popular approaches to AI include traditional statistical methods, traditional symbolic AI, and computational intelligence (CI). CI is a fairly new research area. It is a set of nature-inspired computational methodologies and approaches for addressing complex real-world problems for which traditional approaches are ineffective or infeasible. CI includes artificial neural networks (ANN), fuzzy logic, and evolutionary computation (EC).

Swarm intelligence (SI) is a part of EC. It studies the collective behavior of decentralized, self-organized systems, natural or artificial. Typical SI systems consist of a population of simple agents, or boids, interacting locally with one another and with their environment. The inspiration often comes from nature, especially biological systems. The agents in an SI system follow very simple rules. There is no centralized control structure dictating how individual agents should behave. The agents’ real behaviors are local, and to a certain degree random; however, interactions between such agents lead to the emergence of “intelligent” global behavior, which is unknown to the individual agents. Well-known examples of SI include ant colonies, bird flocking, animal herding, bacterial growth, and fish schooling.

Self-organization is a key feature of SI systems. It is a process where global order or coordination arises out of the local interactions between the components of an initially disordered system. This process is spontaneous; that is, it is not controlled by any agent inside or outside of the system. Self-organization in swarms is interpreted through three basic ingredients, as follows.

(1) Strong dynamical nonlinearity (often involving positive and negative feedback): positive feedback helps promote the creation of convenient structures, while negative feedback counterbalances positive feedback and helps to stabilize the collective pattern.

(2) Balance of exploitation and exploration: SI identifies a suitable balance between exploiting known good solutions and exploring the search space for new ones.

(3) Multiple interactions: agents in the swarm use information coming from neighbor agents so that the information spreads throughout the network.

Particle Swarm Optimization (PSO) is a population-based technique for stochastic search in a multidimensional space: it explores the search space of a given problem, iteratively trying to improve candidate solutions with regard to a given measure of quality, in order to find the settings or parameters that maximize a particular objective. PSO has been employed successfully for solving a variety of optimization problems, including many multifaceted problems where popular methods like steepest descent, gradient descent, conjugate gradient, or Newton's method do not give satisfactory results. The technique was first described by James Kennedy and Russell C. Eberhart in 1995. The algorithm is inspired by the social behavior of bird flocking and fish schooling. Suppose a group of birds is searching for food in an area and only one piece of food is available. The birds do not have any knowledge about the location of the food, but they know how far the food is from their present location. So the best strategy to locate the food is to follow the bird nearest to the food.

A flying bird has a position and a velocity at any time t. In search of food, the bird changes its position by adjusting its velocity. The velocity changes based on the bird's past experience and also on feedback received from neighbouring birds. This searching process can be artificially simulated for solving non-linear optimization problems, which makes PSO a population-based stochastic optimization technique inspired by the social behaviour of bird flocking or fish schooling. In this algorithm, each solution is considered a bird, called a particle. All particles have a fitness value, calculated using the objective function. All particles preserve their individual best performance, and they also know the best performance of their group. They adjust their velocity considering their own best performance and also the performance of the best particle.

The usual aim of the particle swarm optimization (PSO) algorithm is to solve an unconstrained minimization problem: find x* such that f(x*) <= f(x) for all d-dimensional real vectors x. The objective function f: R^d -> R is called the fitness function. Each particle i in the swarm P has a neighborhood N_i (a subset of P). The structure of the neighborhoods is called the swarm topology, which can be represented by a graph. Usual topologies are the fully connected topology and the circle topology.

Consider the initial state of a four-particle PSO algorithm seeking the global maximum in a one-dimensional search space. The search space is composed of all the possible solutions. The PSO algorithm has no knowledge of the underlying objective function, and thus has no way of knowing if any of the candidate solutions are near to or far away from a local or global maximum. The PSO algorithm simply uses the objective function to evaluate its candidate solutions, and operates upon the resultant fitness values. Each particle maintains its position, composed of the candidate solution and its evaluated fitness, and its velocity. Additionally, it remembers the best fitness value it has achieved thus far during the operation of the algorithm, referred to as the individual best fitness, and the candidate solution that achieved this fitness, referred to as the individual best position or individual best candidate solution. Finally, the PSO algorithm maintains the best fitness value achieved among all particles in the swarm, called the global best fitness, and the candidate solution that achieved this fitness, called the global best position or global best candidate solution.

The PSO algorithm consists of just three steps, which are repeated until a stopping condition is met:

1. Evaluate the fitness of each particle

2. Update individual and global best fitnesses and positions

3. Update velocity and position of each particle

The first two steps are fairly trivial. Fitness evaluation is conducted by supplying the candidate solution to the objective function. Individual and global best fitnesses and positions are updated by comparing the newly evaluated fitnesses against the previous individual and global best fitnesses, and replacing the best fitnesses and positions as necessary. The velocity and position update step is responsible for the optimization ability of the PSO algorithm. Once the velocity for each particle is calculated, each particle’s position is updated by applying the new velocity to the particle’s previous position. This process is repeated until some stopping condition is met. Some common stopping conditions include: a preset number of iterations of the PSO algorithm, a number of iterations since the last update of the global best candidate solution, or a predefined target fitness value.
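In the standard textbook formulation of PSO, the velocity and position of particle i in dimension d are updated as

v_{id}(t+1) = w \, v_{id}(t) + c_1 r_1 \big( p_{id} - x_{id}(t) \big) + c_2 r_2 \big( g_d - x_{id}(t) \big)

x_{id}(t+1) = x_{id}(t) + v_{id}(t+1)

where w is the inertia weight, c_1 and c_2 are the cognitive and social acceleration coefficients, r_1 and r_2 are random numbers drawn uniformly from [0, 1], p_i is the individual best position of particle i, and g is the global best position.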

The same can be described in detailed steps as follows:

Step: 1 ==> Initialize particles

Step: 2 ==> Evaluate the fitness of each particle

Step: 3 ==> Modify velocities based on previous best and global best positions

Step: 4 ==> Check the termination criteria; if not satisfied, go to Step: 2.

Step: 5 ==> Stop
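Putting these steps together, below is a minimal sketch of the standard global-best PSO loop in Scala. The fitness function (the sphere function), the swarm size, and the coefficient values are illustrative choices, not values taken from this post:

import scala.util.Random

object SimplePSO {
  def main(args: Array[String]): Unit = {
    val rng = new Random(42)
    val dim = 2
    val swarmSize = 20
    val iterations = 100
    val (w, c1, c2) = (0.7, 1.5, 1.5) // inertia and acceleration coefficients

    // Fitness function to minimize (illustrative: the sphere function)
    def f(x: Array[Double]): Double = x.map(v => v * v).sum

    // Step 1: initialize particles with random positions and velocities
    val pos = Array.fill(swarmSize, dim)(rng.nextDouble() * 10 - 5)
    val vel = Array.fill(swarmSize, dim)(rng.nextDouble() * 2 - 1)
    val pBest    = pos.map(_.clone)   // individual best positions
    val pBestFit = pBest.map(f)       // individual best fitnesses
    var gBest    = pBest(pBestFit.indexOf(pBestFit.min)).clone // global best

    for (_ <- 1 to iterations; i <- 0 until swarmSize) {
      // Step 3: modify velocity using previous best and global best positions
      for (d <- 0 until dim) {
        val (r1, r2) = (rng.nextDouble(), rng.nextDouble())
        vel(i)(d) = w * vel(i)(d) +
          c1 * r1 * (pBest(i)(d) - pos(i)(d)) +
          c2 * r2 * (gBest(d) - pos(i)(d))
        pos(i)(d) += vel(i)(d)        // position update
      }
      // Step 2: evaluate fitness and update individual/global bests
      val fit = f(pos(i))
      if (fit < pBestFit(i)) {
        pBestFit(i) = fit
        pBest(i) = pos(i).clone
        if (fit < f(gBest)) gBest = pos(i).clone
      }
    }
    println(s"Best position: ${gBest.mkString("[", ", ", "]")}  fitness: ${f(gBest)}")
  }
}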

Unlike Genetic Algorithms (GA), PSOs do not change the population from generation to generation, but keep the same population, iteratively updating the positions of the members of the population (i.e., particles). PSOs have no operators of “mutation” or “recombination”, and no notion of “survival of the fittest”. On the other hand, similarly to GAs, an important element of PSOs is that the members of the population “interact”, or “influence”, each other.

PSO has several advantages, including fast convergence, few tuning parameters, and simple and easy implementation; hence, it can be used to solve nonlinear, non-differentiable, and multipeak optimization problems, particularly in science and engineering fields. As a powerful optimization technique, PSO has been extensively applied in different geotechnical engineering aspects such as slope stability analysis, pile and foundation engineering, rock and soil mechanics, and tunneling and underground space design. The fitness function can be non-differentiable (only values of the fitness function are used). The method can be applied to optimization problems of large dimensions, often producing quality solutions more rapidly than alternative methods.

The disadvantages of the particle swarm optimization (PSO) algorithm are that it can easily fall into local optima in high-dimensional spaces and has a low convergence rate in the iterative process. There is no general convergence theory applicable to practical, multidimensional problems. For satisfactory results, tuning the input parameters and experimenting with various versions of the PSO method is sometimes necessary. The stochastic variability of PSO results is very high for some problems and some values of the parameters. Also, some versions of the PSO method depend on the choice of the coordinate system.

To address the above-mentioned problems, many solutions have been proposed; they can be divided into the following three types.

(i) Major modifications, including quantum-behaved PSO, bare-bones PSO, chaotic PSO, fuzzy PSO, PSOTVAC, OPSO, SPSO, and topology.

(ii) Minor modifications, including constriction coefficient, velocity clamping, trap detection, adaptive parameter, fitness-scaling, surrogate modeling, cooperative mechanism, boundary shifting, position resetting, entropy map, ecological behavior, jumping-out strategy, preference strategy, neighborhood learning, and local search.

(iii) Hybridization, PSO being hybridized with GA, SA, TS, AIS, ACO, HS, ABC, DE, and so forth.

The modifications of PSO are: QPSO (quantum-behaved PSO), BBPSO (bare-bones PSO), CPSO (chaotic PSO), FPSO (fuzzy PSO), AFPSO (adaptive FPSO), IFPSO (improved FPSO), PSO with time-varying acceleration coefficients (PSOTVAC), OPSO (opposition-based PSO), and SPSO (standard PSO).

The application categories of PSO are “electrical and electronic engineering,” “automation control systems,” “communication theory,” “operations research,” “mechanical engineering,” “fuel and energy,” “medicine,” “chemistry,” “biology”.

Reference: Information collected from various sources on the Internet


If you are always trying to be normal you will never know how amazing you can be!!

Monday, 9 March 2020

DataOps

Way to DataOps

DataOps focuses on the end-to-end delivery of data. In the digital era, companies need to harness their data to derive competitive advantage. In addition, companies across all industries need to comply with new data privacy regulations. The need for DataOps can be summarised as follows:
  • More data is available than ever before
  • More users want access to more data in more combinations

DataOps

When development and operations don’t work in concert, it becomes hard to ship and maintain quality software at speed. This led to the need for DevOps. 
DataOps is similar to DevOps, but centered around the strategic use of data, as opposed to shipping software. DataOps is an automated, process-oriented methodology used by Big Data teams to improve the quality and reduce the cycle time of data analytics. It applies to the entire data life cycle, from data preparation to reporting.
It includes automating different stages of the workflow, including BI, Data Science, and Analytics. DataOps speeds up the production of applications running on Big Data processing frameworks.

Components 

DataOps includes the following components:

  • Data Engineering
  • Data Integration
  • Data security
  • Data Quality

DevOps and DataOps

  • DevOps is the collaboration between Developers, Operations, and QA Engineers across the entire application delivery pipeline, from design and coding to testing and production support. DataOps, by contrast, is a data management method that emphasizes communication, collaboration, integration, and automation of processes between Data Engineers, Data Scientists, and other data professionals.
  • The mission of DevOps is to enable developers and managers to handle modern web-based application development and deployment. DataOps enables data professionals to optimize modern web-based data storage and analytics.
  • DevOps focuses on continuous delivery by leveraging on-demand IT resources and by automating testing and deployment. DataOps tries to bring the same improvements to data analytics.

Steps to implement DataOps

The following are the 7 steps to implement DataOps:
  1. Add data and logic tests (see the sketch after this list)
  2. Use a version control system 
  3. Branch and Merge Codebase
  4. Use multiple environments 
  5. Reuse and containerize
  6. Parameterize processing 
  7. Orchestrate data pipelines 
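As an illustration of step 1, a data or logic test can be as simple as asserting expectations about a dataset before it moves down the pipeline. Below is a minimal sketch using Spark (covered earlier in this blog); the dataset path, column names, and rules are hypothetical examples:

import org.apache.spark.sql.SparkSession

object DataTests {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("DataTests")
      .master("local[*]")
      .getOrCreate()

    // Hypothetical input dataset
    val orders = spark.read.parquet("C:/data/orders")

    // Data test: incoming records meet basic expectations
    val nullIds = orders.filter(orders("order_id").isNull).count()
    assert(nullIds == 0, s"Found $nullIds orders with a null order_id")

    // Logic test: a business rule holds after transformation
    val negativeTotals = orders.filter(orders("total") < 0).count()
    assert(negativeTotals == 0, s"Found $negativeTotals orders with negative totals")

    spark.stop()
  }
}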

The above details are based on the learnings gathered from different Internet sources. 



Keep going, because you didn't come this far to come only this far.. 

Sunday, 1 March 2020

DevOps

DevOps is a Software Engineering practice that aims at unifying Software Development and Operation. As the name implies, it is a combination of Development and Operations. The main phases in each of these can be described as follows:
  • Dev: Plan - - > Create - - > Verify - - > Package
  • Ops: Release - - > Configure - - > Monitor

DevOps Culture 

DevOps is often described as a culture. Hence it consists of different aspects such as:
  • Engineer Empowerment: It gives engineers more responsibility over the typical application life cycle, from development, testing, deployment, and monitoring to being on call.
  • Test-Driven Development: This is the practice of writing tests before writing code (see the sketch after this list). It helps increase the quality of service and gives developers more confidence for faster and more frequent code releases.
  • Automation: This involves automating everything that can be automated, including test automation, infrastructure automation, deployment automation, etc.
  • Monitoring: This is the process of building monitoring alerts and monitoring the applications.
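As a tiny illustration of test-driven development, the sketch below writes the test first, in Scala. It assumes ScalaTest (org.scalatest) as the test library and uses a made-up Calculator object; neither comes from this post:

import org.scalatest.funsuite.AnyFunSuite

// The test is written first and fails until Calculator.add is implemented
class CalculatorSpec extends AnyFunSuite {
  test("add sums two integers") {
    assert(Calculator.add(2, 3) == 5)
  }
}

// Production code, written only after the failing test exists
object Calculator {
  def add(a: Int, b: Int): Int = a + b
}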

Challenges in DevOps

Dev Challenges
  • Waiting time for code deployment -> Continuous Integration ensures quick deployment of code, faster testing, and speedy feedback.
  • Pressure of work on old code -> Since there is no waiting time to deploy the code, the developer can focus on building the current code.

Ops Challenges
  • Difficult to maintain uptime of the production environment -> Containerization or virtualization provides a simulated environment to run the software containers and offers great reliability for application uptime.
  • Tools to automate infrastructure management are not effective -> Configuration management helps to organize and execute configuration plans, consistently provision the system, and proactively manage the infrastructure.
  • As the number of servers to be monitored increases, it becomes difficult to diagnose issues -> Continuous monitoring and a feedback system established through DevOps assure effective administration.

Periodic Table of DevOps Tools

[Image of the XebiaLabs Periodic Table of DevOps Tools; see the link under References.]

Popular DevOps Tools

Some of the most popular DevOps tools are:
  • Git: Git is an open source, distributed, and the most popular software versioning system. It works on a client-server model. Code can be downloaded from the main repository simultaneously by various clients or developers.
  • Maven: Maven is a build automation tool. It automates the software build process and dependency resolution. A Maven project is configured using a project object model, or pom.xml, file.
  • Ansible: Ansible is an open source application used for automated software provisioning, configuration management, and application deployment. Ansible helps in controlling an automated cluster environment consisting of many machines.
  • Puppet: Puppet is an open source software configuration management and automated provisioning tool. It is an alternative to Ansible and provides better control over client machines. Puppet comes with a GUI, which makes it easier to use than Ansible.
  • Docker: Docker is a containerization technology. Containers package an application with all of its dependencies. These containers can be deployed on any machine without caring about underlying host details.
  • Jenkins: Jenkins is an open source automation server written in Java. Jenkins is used in creating continuous delivery pipelines.
  • Nagios: Nagios is used for continuous monitoring of infrastructure. Nagios helps in monitoring servers, applications, and networks. It provides a GUI to check various details like memory utilisation, fan speed, routing tables of switches, or the state of an SQL server.
  • Selenium: This is an open source automation testing framework used for automating the testing of web applications. Selenium is not a single tool but a suite of tools. There are four components of Selenium: Selenium IDE, RC, WebDriver, and Grid. Selenium is used to repeatedly execute test cases for applications without manual intervention and generate reports.
  • Chef: Chef is a configuration management tool. Chef is used to manage configuration tasks like creating or removing a user, adding an SSH key to a user present on multiple nodes, installing or removing a service, etc.
  • Kubernetes: Kubernetes is an open source container orchestration tool originally developed by Google. It is used in continuous deployment and auto-scaling of container clusters. It increases fault tolerance and load balancing in a container cluster.

References 

https://xebialabs.com/periodic-table-of-devops-tools/

Can DevOps be incorporated into Data Management?
More details regarding the same will be discussed in the next post.

Also Thank you to one of my former colleagues for inspiring me to write a post on this hot topic.



She woke up every morning with the option of being anyone she wished, how beautiful it was that she always chose herself... 

Saturday, 11 January 2020

Quantum Computing

Classical computers are composed of registers and memory. The collective contents of these are often referred to as the state. Instructions for a classical computer act on this state, which consists of long strings of bits, each encoding either a zero or a one.
The world is running out of computing capacity. Moore's law states that the number of transistors on a microprocessor doubles roughly every 18 months, so transistors must soon approach atomic scale. The next logical step, then, is to create quantum computers, which harness the power of atoms and molecules to perform processing tasks.
The word quantum is derived from Latin, meaning 'how great' or 'how much'. The discovery that particles are discrete packets of energy with wave-like properties led to the branch of physics called quantum mechanics.
Quantum computing is the use of quantum mechanics to process information.

Qubits

The main advantage of quantum computers is that they aren't limited to two states, since they use qubits, or quantum bits, instead of bits.
Qubits represent atoms, ions, electrons, or photons and their respective control devices, working together to act as computer memory. Because a quantum computer can hold multiple states simultaneously, it has the potential to be millions of times more powerful than today's supercomputers.
Qubits are very hard to manipulate; any disturbance causes them to fall out of their quantum state. This is called decoherence. The field of quantum error correction examines the different ways to avoid decoherence.

Superposition and Entanglement

Superposition is the feature that frees qubits from binary constraints. It is the ability of a quantum system to be in multiple states at the same time.
Entanglement is an extremely strong correlation that exists between quantum particles. It is so strong that two or more quantum particles can be linked perfectly even if separated by great distances.
While a classical computer works with ones and zeroes, a quantum computer has the advantage of using ones, zeroes, and superpositions of ones and zeroes. This is why a quantum computer can process a vast number of calculations simultaneously.
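In standard quantum-mechanical notation, a qubit in superposition is written as

|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1

where measuring the qubit yields 0 with probability |\alpha|^2 and 1 with probability |\beta|^2.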

Error Correcting Codes

Quantum computers outperform classical computers on certain problems, but efforts to build them have been hampered by the fragility of qubits, which are easily affected by heat and electromagnetic radiation. Error-correcting codes are used to mitigate this.

Quantum Computer 

A computation device that makes use of quantum mechanical phenomena, such as superposition and entanglement, to perform operations on data is termed a quantum computer. Such a device is probabilistic rather than deterministic, so it returns multiple answers for a given problem, thereby indicating the confidence level of the computer.

Applications of Quantum Computers 

1. They are great for solving optimization problems. 
2. They can easily crack encryption algorithms. 
3. Machine learning tasks such as NLP, image recognition, etc.

Quantum Programming Language (QPL)

A quantum programming language is used to write programs for quantum computers. Quantum Computer Language (QCL) is the most advanced implemented QPL.
The basic built-in quantum data type in QCL is qreg (quantum register), which can be interpreted as an array of qubits (quantum bits).

Interesting facts 

Quantum computers require extremely cold temperatures, as sub-atomic particles must be as close as possible to a stationary state to be measured. The cores of D-Wave quantum computers operate at -460 degrees F (-273 degrees C), which is about 0.02 degrees above absolute zero.

Google's Quantum Supremacy

Quantum Supremacy, or Quantum Eclipse, is a demonstration that a quantum computer is able to perform a calculation that is impossible for a classical one.
Google's quantum computer, called 'Sycamore', consists of only 54 qubits. It was able to complete a task called the Random Circuit Problem. Google says Sycamore was able to find the answer in just a few minutes, whereas it would take 10,000 years on the most powerful supercomputer.

Amazon Braket

AWS announced its new quantum computing service, Amazon Braket, along with the AWS Center for Quantum Computing and the Amazon Quantum Solutions Lab, at AWS re:Invent on 2 December 2019. Amazon Braket is a fully managed service that helps you get started with quantum computing by providing a development environment to explore and design quantum algorithms, test them on simulated quantum computers, and run them on quantum hardware.

References

Information collected from various sources on the Internet.


At times an impulse is required to trigger an action. Special Thanks to Amazon Braket for being an irresistible impulse to make me write this blog on Quantum Computing. 






Make every day our masterpiece, 
Happy 2020!!


Saturday, 27 July 2019

Exploring Cassandra: Part- 1

Cassandra is an open source, column-family NoSQL database that is scalable to handle massive volumes of data stored across commodity nodes. 

Why Cassandra?

Consider a scenario where we need to store large amounts of log data. Millions of log entries will be written every day. The system also requires a server with zero downtime.

Challenges with RDBMS


  • Cannot efficiently handle huge volumes of data
  • Difficult to serve users worldwide with the centralized single-node model
  • Hard to guarantee a server with zero downtime

Using Cassandra


  • It is highly scalable and hence can handle large amounts of data
  • Most appropriate for write-heavy workloads
  • Can handle millions of user requests per day
  • Can continue working even when nodes are down
  • Supports wide rows with a very flexible schema, wherein all rows need not have the same number of columns (see the sketch below)
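A minimal sketch of such a data model from Scala, assuming the DataStax Java driver (java-driver-core) is on the classpath and a Cassandra node is running locally; the keyspace, table, and column names are hypothetical:

import com.datastax.oss.driver.api.core.CqlSession

object CassandraSketch {
  def main(args: Array[String]): Unit = {
    // Connects to 127.0.0.1:9042 by default
    val session = CqlSession.builder().build()

    session.execute(
      "CREATE KEYSPACE IF NOT EXISTS logs WITH replication = " +
        "{'class': 'SimpleStrategy', 'replication_factor': 1}")

    // Wide-row style table: one partition per device, one row per event time
    session.execute(
      "CREATE TABLE IF NOT EXISTS logs.events (" +
        "device_id text, event_time timestamp, message text, " +
        "PRIMARY KEY (device_id, event_time))")

    session.execute(
      "INSERT INTO logs.events (device_id, event_time, message) " +
        "VALUES ('sensor-1', toTimestamp(now()), 'temperature spike')")

    session.close()
  }
}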

Cassandra Vs RDBMS

[Comparison table image omitted.]

Cassandra Architecture


  • Cassandra follows a peer-to-peer, masterless architecture, so all the nodes in the cluster are considered equal.
  • Data is replicated on multiple nodes so as to ensure fault tolerance and high availability.
  • The node that receives a client request is called the coordinator. The coordinator forwards the request to the appropriate node responsible for the given row key.
  • Data Center: Collection of related nodes
  • Node: Place where the data is stored
  • Cluster: It contains one or more nodes

Applications of Cassandra


  • Suitable for high-velocity data from sensors
  • Useful to store time series data
  • Social media networking sites use Cassandra for analysis and recommendation of products to their customers
  • Preferred by companies providing messaging services for managing massive amounts of data



All good things are difficult to achieve; and bad things are very easy to get.