Summer training students' planet

April 11, 2019

Prashant Sharma (gutsytechster)

How to post on LinkedIn via its API using Python?

Hey folks! This time we are going to play with the LinkedIn API. Well, we all know what APIs are used for, don't we? Of course we do. In case you don't, just go through this article once and you'll understand what APIs are.
Coming back to today's topic, i.e. using the LinkedIn API to post something on your LinkedIn profile via Python. I must say, LinkedIn has provided quite detailed and helpful docs for this, but sometimes we want examples alongside the docs. So I am going to write a Python script that posts on your LinkedIn profile using the API.

Getting Access Token

Before we start writing code, we first need to do some preparation, starting with getting an access token from LinkedIn, which allows us to use its API as an authenticated user. For that we need to go here and create an app. When you start filling out the form for creating the app, you might get confused about what to put in the company field. Since we are using it for testing purposes, you may either create a company or select one from the available choices; the choices show up as soon as you start typing in that field. I selected a company named Test Company and, guess what, it actually didn't have any info. So I think the devs must have created that page for testing purposes 😛. That saved me the trouble.

As soon as the app is created, you can go to the My apps option available on your profile and find your newly created app there. Open the app, click on the Auth option and look for the Permissions field. You will see that there are no permissions yet, and to share on LinkedIn via its API we need them. But don't worry: it just takes some time, about a day or so, for LinkedIn to review your app and grant the permissions.

Once the permissions are granted, follow this guide to get the access token. If you get stuck anywhere in between, don't hesitate to ask in the comments below.

Writing Python Script

Now that you have acquired the access token, we can use the API as an authenticated user; we'll use the token to create valid authenticated requests. Create a directory anywhere on your system and name it linkedin-post. Also make sure to create a virtual environment so that the project's dependencies don't interfere with your other projects.
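Creating the environment can be sketched like this (assuming Python 3's built-in venv module; the directory name venv is arbitrary):

```shell
# A minimal sketch: create and activate a virtual environment inside
# the project directory (assumes Python 3's built-in venv module).
python3 -m venv venv
. venv/bin/activate            # on Windows: venv\Scripts\activate
```

Everything you install with pip from here on stays inside this environment.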

After creating the virtual environment, install Python's requests module using pip

pip install requests

It will install requests along with some of its dependencies. We'll use it to make GET and POST requests to the API. Now create a file post_on_linkedin.py and start writing the following

import requests
import os

Apart from the requests module, I have also imported the os module. We'll see why in a few minutes; keep reading for now. Let's write a few more lines

import requests
import os

access_token = "<your access token here>"

We have assigned the access token generated earlier to a variable so that it's easier to use; in case we need to change it or use it in different places, a variable is much more convenient. But there is a problem here. Can you guess what it could be? The thing is, we often upload our code to a code hosting service like GitHub, Bitbucket or GitLab, so keeping such confidential credentials inside the code would be risky. It's not a good approach.

To resolve this, we use something known as environment variables. We define our private credentials as key-value pairs in a file called .env and then use them as variables in our source file, making sure we never push the .env file to the code hosting service. To read environment variables, Python has an awesome package called python-dotenv. Let's go ahead and install it using pip as before

pip install python-dotenv

Now create a .env file in the same directory as the source file and write the following content to it.

ACCESS_TOKEN="<your access token here>"

Here the environment variable is ACCESS_TOKEN and its value is the actual token you assign to it. To use this environment variable, we'll need to make a few changes to our source file

import requests
import os

from os.path import join, dirname
from dotenv import load_dotenv

# Create .env file path
dotenv_path = join(dirname(__file__), '.env')

# load file from the path
load_dotenv(dotenv_path)

# accessing environment variable
access_token = os.getenv('ACCESS_TOKEN')

We have imported a few more functions and used them to build the path to the .env file and to load it. Once the file is loaded, using an environment variable is as simple as calling os.getenv() with the key the value was assigned to; in our case, that key is ACCESS_TOKEN.

Now, let’s proceed further and add one more line

...
api_url_base = 'https://api.linkedin.com/v2/'

We have defined another variable holding the API's base URL, since every LinkedIn API URL starts with it. We can append endpoints to it as needed.

To share on LinkedIn, we always send a POST request, along with the post data, to the API endpoint defined here. If you look closely, the first parameter to be sent with the request is author, whose value is your Person URN. To retrieve the Person URN, we send a GET request to the endpoint defined here; the id field in the response from this GET request is the Person URN. Since this ID is also a private credential, we'll keep it in the .env file and access it through an environment variable.

#.env file

ACCESS_TOKEN="<your access token here>"
URN="<your Person URN here>"

# post_on_linkedin.py

...
access_token = os.getenv('ACCESS_TOKEN')
urn = os.getenv('URN')
author = f"urn:li:person:{urn}"

The URN is used to define the author parameter. We have used f-strings to substitute the value of urn in the author string. Apart from the post data that is to be sent along with the POST request, we also have to define headers.
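For reference, that URN-fetching GET request can be sketched as follows. This is my sketch rather than LinkedIn's official sample: it assumes the profile endpoint https://api.linkedin.com/v2/me and a token with the r_liteprofile permission.

```python
import requests

access_token = "<your access token here>"  # placeholder, as above


def fetch_person_urn(token):
    """Return the authenticated member's ID, used as the Person URN.

    A sketch: assumes the /v2/me profile endpoint and a token with
    the r_liteprofile permission."""
    response = requests.get(
        "https://api.linkedin.com/v2/me",
        headers={"Authorization": f"Bearer {token}"},
    )
    response.raise_for_status()
    return response.json()["id"]


# Only call the API once a real token has been filled in.
if access_token != "<your access token here>":
    print(fetch_person_urn(access_token))
```

Copy the printed id into the URN entry of your .env file.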

...
headers = {'X-Restli-Protocol-Version': '2.0.0',
           'Content-Type': 'application/json',
           'Authorization': f'Bearer {access_token}'}

You may have noticed that we used the access_token in the Authorization header; this is how the API authenticates us. We have to send these headers with every request when we want to share on LinkedIn.

Great work till now! Just a few more lines of code and we'll be able to post on LinkedIn using a simple Python script. Let's get ahead then.

Now we'll define a function post_on_linkedin (you can name it anything you want) and write the following into it.

...
def post_on_linkedin():
    api_url = f'{api_url_base}ugcPosts'

    post_data = {
        "author": author,
        "lifecycleState": "PUBLISHED",
        "specificContent": {
            "com.linkedin.ugc.ShareContent": {
                "shareCommentary": {
                    "text": "This is an automated share by a python script"
                },
                "shareMediaCategory": "NONE"
            },
        },
        "visibility": {
            "com.linkedin.ugc.MemberNetworkVisibility": "CONNECTIONS"
        },
    }

    response = requests.post(api_url, headers=headers, json=post_data)

    if response.status_code == 201:
        print("Success")
        print(response.content)
    else:
        print(response.content)

Let's understand this piece of code in a few points:

  1. We have defined the api_url. As mentioned earlier, every request to share on LinkedIn has to be sent to the API endpoint defined here, so we appended ugcPosts to api_url_base to get that endpoint.
  2. We have defined the post_data to be sent with the request, as a Python dictionary that closely resembles JSON by keeping every key and value in strings. All the necessary parameters are defined along with their values, as specified here.
  3. We have sent the POST request to the api_url with the defined headers and post_data. The json parameter takes care of encoding post_data as JSON.
  4. We check whether response.status_code is 201, which signifies successful execution of the request, and then print out the response's content.

That's it! We have successfully written a Python script that can post on LinkedIn using its API. But you know what? The code still won't work. Can you guess why? It's simple: we haven't called the function yet 😛. What are you waiting for? Just call the function outside its block.

...
post_on_linkedin()

Hurrah! Now go to your terminal and run the script. I am sure it will work. All the code from this tutorial is hosted here; you can check it for reference. It was fun working with the API and doing some amazing stuff. I hope it was helpful for you. If you find any mistake or want to give a suggestion, feel free to write in the comment section below.

References

  1. Share on LinkedIn

  2. https://www.digitalocean.com/community/tutorials/how-to-use-web-apis-in-python-3
  3. https://robinislam.me/blog/reading-environment-variables-in-python/

Meet you in the next post. Till then, be curious and keep learning!


by gutsytechster at April 11, 2019 03:43 PM

April 10, 2019

Piyush Aggarwal (brute4s99)

the git flow

INTRODUCTION

A common question for anyone stepping foot into the world of FOSS contributions is: how to start? This post aims to be the post I wish I had read a year ago when I started my journey.

BABY STEPS

The commonly known workflow for git is as follows:-

  1. Make a fork of the target project.
  2. Clone the fork to your local dev machine.
  3. Set a remote for upstream.

Contact the project team, introduce yourself and ask questions related to the project. Read more about this here. This is super important!

  1. Test the project yourself.

  2. Look for any issues or bugs/ something to fix in the project.

  3. This step should be performed every time you are about to make a new branch, or want to update the master branch of your fork. Run the following two commands to update the master branch of the fork:-

      git fetch upstream
      git merge upstream/master
      *resolve the conflicts, if any*

It's always a good idea to make a separate (feature) branch for every feature, issue or bug you work on. While this keeps all your diverse efforts in one single folder, it keeps them completely separate from each other. This way, you don't have to worry about anything but the branch name! Keep branch names simple and memorable.

  1. Make a new branch titled something relevant to the thing you wish to fix, say XYZ.

  2. Make the fix, push it to origin (i.e your fork) remote, as the XYZ feature branch.

  3. Make a Pull/Merge Request.

      1. Wait for review.
      2. Make necessary fixes.
      3. Repeat from step 8.1 while not approved by every reviewer.
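The branch-and-push steps above can be sketched as follows. This demo runs in a throwaway repository; in real use you run the same commands inside your clone of the fork, and "XYZ" is a placeholder branch name.

```shell
# Demo in a throwaway repository (set up so the commands below work anywhere).
cd "$(mktemp -d)" && git init -q .
git config user.email "you@example.com" && git config user.name "You"
git commit -q --allow-empty -m "init"

git checkout -b XYZ                # 1. new feature branch
echo "the fix" > file.txt          # 2. make the fix...
git add file.txt
git commit -q -m "Fix XYZ"         #    ...and commit it
# 3. publish the branch to your fork, then open a Pull/Merge Request:
#    git push origin XYZ
```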

Most probably, you will now be asked to rebase your branch. That just means performing a couple of commands that replay all your commits on top of the latest stuff from upstream/master.

  1. Perform the following two commands to rebase your XYZ feature branch:-

      git fetch upstream
      git rebase upstream/master
      *resolve the conflicts, if any*

The Stash

Many times you might want to start working on some other feature right away! In such cases you usually have some uncommitted files in your current branch.

This often happens in the face of release deadlines. git-stash can save your uncommitted changes on a stack-like data structure. This is done with the command:-

      git stash
      *your current working branch will be clean now, i.e. there will be no uncommitted changes left*

Don’t worry! They are in the stash, safe and sound. You can save multiple sets of uncommitted changes in the stash by using git stash every now and then. To see the list of all such sets of uncommitted changes, use the following command:-

      git stash list

Just perform this command to get the most recently stashed changes back in your current working branch:-

      git stash apply
      *yes, you have the freedom to use `git stash` on one branch, then check out another branch and do `git stash apply`. It will work.*

If you wish to retrieve any other set, refer to the index of stash, for example:-

stash@{0}: WIP on telephony_unknown: 7df58d0d SVN_SILENT made messages (.desktop file) - always resolve ours
stash@{1}: WIP on master: 7df58d0d SVN_SILENT made messages (.desktop file) - always resolve ours
stash@{2}: WIP on timestamp: 9ec0d04f SVN_SILENT made messages (.desktop file) - always resolve ours
(END)
      git stash apply stash@\{1\}
      *this will apply the set of changes at index 1, i.e. stash@{1}, to the current working branch*
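The whole stash round-trip can be tried out like this. Again a sketch in a throwaway repository; in real use you would already be in your clone with uncommitted changes.

```shell
# Demo in a throwaway repository.
cd "$(mktemp -d)" && git init -q .
git config user.email "you@example.com" && git config user.name "You"
echo "base" > notes.txt && git add notes.txt && git commit -q -m "init"

echo "wip" >> notes.txt            # an uncommitted change...
git stash                          # ...saved; the working tree is clean again
git stash list                     # stash@{0}: WIP on <branch>: ...
git stash apply                    # the change to notes.txt is back
```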

CONCLUSION

This should be enough to get you going with the adventures of git.

Git is focused on freedom by design. You can do a lot of stuff, and you can also undo it as you go, so don't fret about playing with this empowering tool!

signing off now; later! :)

Stay safe and make the internet a healthier place!

April 10, 2019 07:31 PM

April 02, 2019

Bhavin Gandhi

Using Gadgetbridge and openScale with Amazfit Bip

Around 6 months ago, the wrist watch I had been using for the last 11 years broke. It was not possible to get it repaired, as the company no longer manufactures any parts for it. I was looking for an alternative but didn't like any of the other regular watches available, so I decided to buy the Amazfit Bip by Huami. Huami is a brand by Xiaomi. While I'm not really interested in the step count or sleep tracking, I liked the design of the watch.

by @_bhavin192 (Bhavin Gandhi) at April 02, 2019 02:23 PM

March 23, 2019

Bhavin Gandhi

infracloud.io: HA + Scalable Prometheus with Thanos

This is another blog post I wrote, about a tool called Thanos which can be used to set up a highly available Prometheus. It was published at infracloud.io on 8th December, 2018. HA + Scalable Prometheus with Thanos

by @_bhavin192 (Bhavin Gandhi) at March 23, 2019 09:25 AM

March 22, 2019

Bhavin Gandhi

infracloud.io: Kubernetes Autoscaling with Custom Metrics

I wrote a blog post about scaling workloads in Kubernetes based on the metrics generated by applications. It was published at infracloud.io on 20th November, 2018. Kubernetes Autoscaling with Custom Metrics

by @_bhavin192 (Bhavin Gandhi) at March 22, 2019 06:18 PM

March 21, 2019

Prashant Sharma (gutsytechster)

YAML 101

Writing configuration files has become much easier since we came across YAML. YAML is a recursive acronym standing for YAML Ain't Markup Language. But guess what? Initially it was said to mean Yet Another Markup Language, but it was then repurposed to be data-oriented rather than document markup. In short, YAML is a human-readable data serialization language. Though it can be used in many applications where data is stored or transmitted, it is most commonly used for writing configuration files. Many software tools, like Travis CI and Docker, use YAML to define their configuration.
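For instance, a Travis CI configuration is just a YAML file; a hypothetical minimal .travis.yml might look like this (keys per Travis's documented schema):

```yaml
# .travis.yml (hypothetical example)
language: python
python:
  - "3.6"
install:
  - pip install -r requirements.txt
script:
  - pytest
```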

It is also said to be a superset of JSON syntax, i.e. every JSON document is a valid YAML document as well. Apart from that, it also has some features that JSON lacks, which we'll see in a few minutes. That's what makes it so awesome.

YAML uses .yaml as its official extension, though many documents also use .yml. For a short answer as to why both extensions exist, please refer here. Well then, let's start understanding the basics and write some configuration files ourselves.

Structure

A YAML file consists mainly of map objects, just like dictionaries in Python or hashes in other languages, i.e. key-value pairs generally defined as follows:

---
key: value
...

A key is followed by a colon and a space, then the value associated with it. Apart from the key-value pair, I've used dashes above its definition: three dashes represent the start of a YAML file or, more specifically, separate directives from content. There are also a few dots below the key-value pair: three dots represent the end of the YAML file.

Keys/Values

A key in YAML can be of different types: a string, a scalar value, a floating point number etc. Strings don't need to be quoted with single or double quotation marks; however, they may be, for the purpose of escaping some characters.

The same goes for values. They can also be of any data type. In addition to the types allowed for keys, a value can be a boolean or null as well. E.g.

---
'key with quotation marks': 'value in quotation marks'
23: "An integer key"
'a boolean value': true
key with spaces: 3
a null value: null

All above examples are valid map objects as per YAML syntax.
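If you want to check such documents programmatically, you can load them with a YAML parser; here is a sketch using the third-party PyYAML package (assumed installed, via `pip install pyyaml`):

```python
import yaml  # third-party PyYAML package, assumed installed

doc = """
'key with quotation marks': 'value in quotation marks'
23: "An integer key"
'a boolean value': true
key with spaces: 3
a null value: null
"""

data = yaml.safe_load(doc)
print(data[23])                 # the key 23 stays an integer
print(data["a boolean value"])  # parsed as the boolean True
print(data["a null value"])     # parsed as None
```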

Nesting

Nesting in YAML is implemented using indentation. Indentation is given by two or more spaces at the beginning of a line, and YAML is very strict about it. For e.g.

---
a_nested_object:
  key: value
  another_key: another_value

One more thing: YAML uses only spaces, not tabs.

Sequence

A sequence or list can also be defined in YAML using `-` (dash) as a list marker. For e.g.

---
- item1
- item2
- nested_item:
    - nested_item1
    - nested_item2

Note the space after each list marker.

Multi-line Strings

Multi-line strings in YAML can be written either as a 'literal block' or a 'folded block'. The difference is that a 'literal block' preserves newlines while a 'folded block' folds them. A literal block uses the pipe (|) character, whereas a folded block uses the '>' symbol. Consider an example.

data: |
   There once was a tall man from Ealing
   Who got on a bus to Darjeeling
       It said on the door
       "Please don't sit on the floor"
   So he carefully sat on the ceiling
data: >
   Wrapped text
   will be folded
   into a single
   paragraph

   Blank lines denote
   paragraph breaks

A folded block converts newlines to spaces and removes leading whitespace.
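The difference is easy to verify with a parser (again a sketch using the third-party PyYAML package, assumed installed):

```python
import yaml  # third-party PyYAML package, assumed installed

literal = yaml.safe_load("data: |\n  line one\n  line two\n")
folded = yaml.safe_load("data: >\n  line one\n  line two\n")

print(repr(literal["data"]))  # newlines preserved
print(repr(folded["data"]))   # newlines folded into spaces
```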

Inline Mapping

YAML, being a superset of JSON, allows inline key-value pairs enclosed in curly braces. For e.g.

name: Prashant Sharma
age: 18

can be written as

# Inline format
{name: Prashant Sharma, age: 18}

Though unlike JSON, keys and values don't necessarily need to be quoted. You might have noticed that I used a comment in the above example: a comment is written by prefixing it with a '#'. The same inline style also works for sequences. For e.g.

[milk, pumpkin pie, eggs, juice]

is a valid sequence in YAML. And you know what? Sequences can also be used as keys or values in YAML syntax.

- {name: Prashant, age: 18}
- name: Shiva
  age: 20
- [name, age]: [Neeraj, 14]

Complex Keys

Just as with the multi-line values above, keys can also be complicated in some cases, i.e. a key can span multiple lines or be an indented sequence. To denote a complex key, we use a '?' followed by a space. For e.g.

? |
  This is a key
  that has multiple lines
: and this is its value
? - Prashant Sharma
  - Shiva Saxena
: [1998-11-16, 1997-10-07]

These were some amazing features. Weren't they? But wait, it has more in its pocket. Didn't I tell you earlier how awesome it is? Now then, let's explore it a bit more.

Extra Features

  • Anchors

Anchors in YAML allow us to easily duplicate content across our document and reuse it anywhere via references. An anchor is defined by prefixing the anchor name with an ampersand (&) and is referred to with an asterisk (*) followed by the anchor name. For e.g.

anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name
  • Merge

The merge key (<<) in YAML works together with anchors so that objects can be inherited. Consider an example:

- step: &id001                  # defines anchor label &id001
    instrument:      Lasik 2000
    pulseEnergy:     5.4
    pulseDuration:   12
    repetition:      1000
    spotSize:        1mm
- step:
    <<: *id001
    spotSize: 2mm                # redefines just this key, refers rest from &id001

As you can see, we have merged the content by referring to the anchor and then overriding one key-value pair.
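Parsers that implement the merge key flatten this for you; a sketch with the third-party PyYAML package (assumed installed), using a hypothetical config:

```python
import yaml  # third-party PyYAML package, assumed installed

doc = """
defaults: &defaults
  retries: 3
  timeout: 10
service:
  <<: *defaults
  timeout: 30        # overrides just this key
"""

cfg = yaml.safe_load(doc)
print(cfg["service"])  # inherits retries, overrides timeout
```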

  • Data Typing

We seldom see explicit data typing in YAML files, as YAML itself is capable of detecting simple types like integers and strings. A data type can be specified explicitly using the '!!' symbol followed by the type name. In YAML, data types can be categorized as core, defined and user-defined.

  • Core data types are those usually implemented by all parsers (e.g. integer, string).
  • Defined data types are advanced types specified in the YAML specification but not implemented in every parser, such as binary data.
  • Apart from that, YAML also allows us to define user-defined classes, structures or types.

We’ll take a look at all of them with a few examples:

---
a: 540                     # an integer
b: "540"                   # a string, disambiguated by quotes
c: 540.0                   # a float
d: !!float 123             # also a float via explicit data type prefixed by (!!)
e: !!str 123               # a string, disambiguated by explicit type
f: !!str true              # a string via explicit type

picture: !!binary |        # a binary data type
  R0lGODdhDQAIAIAAAAAAANn
  Z2SwAAAAADQAIAAACF4SDGQ
  ar3xxbJ9p0qa7R0YxwzaFME
  1IAADs=

myObject: !myClass { name: Prashant, age: 18 }

YAML also has a set data type. A set is nothing but a map object with null values; you could say it's a collection of keys only. Sets can be defined as

a_set:
  ? key1
  ? key2
  ? key3

or: {key1, key2, key3}

References

  1. https://learnxinyminutes.com/docs/yaml/
  2. https://en.wikipedia.org/wiki/YAML
  3. https://yaml.org/

Finally, we have reached the conclusion of this blog post. It was quite a long post, but I guarantee you won't need to go elsewhere if you are working with YAML. If you find any mistake or have a suggestion, do tell me through the comment section below. I'll be glad to hear it. Well then, I bid you a happy goodbye. Meet you next time!

Till then, be curious and keep learning!

by gutsytechster at March 21, 2019 03:32 PM

Shiva Saxena (shiva)

Hardwork v/s Smartwork

Hi all! When we talk to someone about productivity, the two terms hard-work and smart-work inevitably come into consideration. Did you ever think about it? I mean, what is "smart-work" really? How is it different from "hard-work"? Keep your answer in mind and keep reading.

This complete post is based on a conversation I had with CuriousLearner. I am thankful to him for explaining the real meaning of smart-work.

This post is going to be all about questioning, and a thought-provoking one at that.

Nowadays, if we ask someone what they prefer between hard-work and smart-work, the answer we'll most probably get is smart-work. Then we may counter by asking: what do you think smart-work really is? Or rather ask that person directly: what is smart-work?

What is smart-work?

Going over the web, we may find out different people giving a different definition of this term. For example:

  • Smart work is that you do any work with lesser efforts

  • Accurately, performed work within a short period of time that’s called smart work.

So, do you have your definition ready? Let's see how well you really know the term.

You must know about Venn diagrams. Suppose hard-work is one circle (the first entity) and smart-work is another (the second entity). What do you think? Are hard-work and smart-work different?

Think about it for a minute or two and keep the answer with you.

Case 1: Yes they are different

If your answer is yes, then you are saying that these two entities are different and the intersection of their Venn diagram is empty. That implies:

  • hard-work  ∩  smart-work  ==  empty

But is it really? Don’t you think there is some similarity among both of them?

Case 2: Somewhere they are the same

If this is your answer, then you are saying that the intersection of hard-work and smart-work is not empty. That implies:

  • hard-work  –  smart-work   ==  something
  • smart-work  –  hard-work  ==  something

If you think this is correct, then please tell me: what is that <something>? 🙂

Think about it.

Were you able to find that <something>? If yes, then you are smart; please feel free to share your solution in the comments section below. I would love to read it. But if not, then why did you choose the answer that they are partially the same? 🙂

Anyhow, what about the least considered answer, as follows.

Case 3: Both overlap each other

Now this is really not a good answer, because it implies that:

  • hard-work  ==  smart-work

This is surely not possible because, whatever else they may be, they are at least not the same thing. So, what are they?

Case 4: One is a subset of other

Really? If yes, then think: which is a subset of which?

  • hard-work is a subset of smart-work? or
  • smart-work is a subset of hard-work?

Think.

At this point, you need to ponder one more important thing. Before reading this post, did you really know what smart-work is? If not, then you were trying to practice something in your daily life that you did not even understand.

And if someone doesn't know what smart-work is, how can they claim to practice it?

Let’s get to the answer now.

Case 5: Smart-work is a subset of Hard-work

They are not different, yet they are not the same either. Smart-work is actually a subset of hard-work. That implies:

  • hard-work  –  smart-work  ==  something
  • smart-work  –  hard-work  ==  empty

Here, this <something> is the part of hard-work that lies outside smart-work. So what exactly is the subset we call smart-work?

The Smart-work

Smart-work is the part of hard-work in which we plan the process to execute in order to accomplish the goal. As simple as that. A better plan means better execution and, thus, better results!

Smart-work was never meant to reduce the effort needed to accomplish a goal; it is meant to get more efficient and better results from the same effort, because effort should not be compromised. 🙂

Smart-work is not a shortcut to the goal; rather, it is the way to make the accomplishment more fruitful.

It is like a vector, which gives direction to a quantity. The best example is pushing a brick.

  • 1D: Pushing a brick against a wall won't give any result, even though you are making an effort. But with the same effort, if you push the brick in the opposite direction, it will move and some work gets done.
  • 2D: Now the situation is more complex: with two axes of motion, you need to plan in which direction your goal lies so that you push the brick along the correct path.
  • 3D: As we keep adding conditions to a task, we find that we need to think more and more to arrive at the correct plan.

In real life there are numerous possible ways to achieve a goal; you need to analyze, research, take feedback, then repeat. Once your plan is ready to go, your smart-work is already done. What's left is hard work and effort.

Persons A and B did the same job, but A got better results than B. Who do you think was the smart-worker? Yes, A, because they planned their tasks well.

Conclusion

If you didn't know what smart-work was before reading this post, then perhaps you had only an assumption about it. Being clear is better than remaining in an assumption. Make sure you know about the thing you claim to be doing. 🙂

Thanks for reading!

See you in the next post!


by Shiva Saxena at March 21, 2019 12:42 PM

March 17, 2019

Kuntal Majumder (hellozee)

Getting Alight

Would start this one with a quote:

    Information is free - you have to know.
    People are not - you have to pay.
    Contributors are priceless - you have to be.

March 17, 2019 07:31 PM

Bhavin Gandhi

Creating presentations with Org mode

As I said in the last blog post about Emacs, I use Org mode a lot. It's the default mode I use for taking notes, tracking tasks etc. The export interface it provides is a really useful feature. There are a lot of packages which provide ways to export an Org mode file to other formats like Markdown, in addition to the default supported formats HTML, PDF, text etc. Presenting within Emacs: a few months ago I gave a talk on Autoscaling in Kubernetes within our company.

by @_bhavin192 (Bhavin Gandhi) at March 17, 2019 02:09 PM

March 16, 2019

Prashant Sharma (gutsytechster)

How to install Docker on Linux Mint Debian Edition(LMDE)

There is a very good chance you will come across Docker at some point in your tech lifetime. This time it was my turn to learn it. But before anything else, you need to install Docker on your machine, and this is the part where I got stuck. So I thought a small post might help other people who face the same thing.

Like every other person, I went to the official documentation looking for the installation procedure. However, the procedures differ by Linux distribution, and I found no specific option for Linux Mint. There were generic options like CentOS, Debian, Ubuntu and Fedora, so I wasn't sure whether to follow the installation procedure for Debian-based distros or whether something else had to be done.

After getting help from various people and searching the web, I successfully installed Docker on my LMDE machine. So let's go through the steps.

  1. First, check whether the package docker.io is available on your system. You can check using the command
    $ aptitude search docker.io

    If it shows the package, then a regular installation of this package will install Docker on your machine. You can proceed with

    $ sudo apt-get install docker.io
  2. If the package docker.io is not present on your system, then we'll go with the procedure defined for Debian, with a few changes, because we are using LMDE and not Debian itself.

    Since we are installing Docker for the first time, we need to set up the Docker repository so that we can install and update it from there. Run the following commands.

    Update the apt package index

    sudo apt-get update

    Install packages to allow apt to use repository over HTTPS

    $ sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common

    Now add Docker’s official GPG key

    $ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

    Now, we will set up the repository for Debian stable release. To check on which Debian base your LMDE is set up, use the following command

    $ cat /etc/os-release

    It would produce the output something similar to

    PRETTY_NAME="LMDE 3 (cindy)"
    NAME="LMDE"
    VERSION_ID="3"
    VERSION="3 (cindy)"
    ID=linuxmint
    ID_LIKE=debian
    HOME_URL="https://www.linuxmint.com/"
    SUPPORT_URL="https://forums.linuxmint.com/"
    BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
    PRIVACY_POLICY_URL="https://www.linuxmint.com/"
    VERSION_CODENAME=cindy
    DEBIAN_CODENAME=stretch

    What we have to look for is at the end of the generated output, i.e. DEBIAN_CODENAME, which is stretch in our case. Hence we'll use the following command to set up the repository

    sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/debian \
       stretch \
       stable"

    As soon as you press enter, the repository will be added to your system. This is the part where I have deviated from the official documentation, which obtains the Debian release name through the command

    $ lsb_release -cs

    directly. However, that would give the code name of our LMDE release, i.e. cindy in my case. To avoid this, we gave the release name manually.

    Now that we have added the repository, let’s update the apt package index once more.

    $ sudo apt-get update

    Great! Let’s go ahead with Docker installation using

    $ sudo apt-get install docker-ce docker-ce-cli containerd.io

    Hurrah! We have successfully installed Docker on our LMDE machine and can now use it.

You can check the Docker version installed on your system, which also confirms that the installation succeeded.

$ docker --version

Docker version 18.09.3, build 774a1f4

Now let’s run another command

$ docker info

Error!

What? Did it give you an error? Something like this

docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create: dial unix /var/run/docker.sock: connect: permission denied.

I thought so! But don’t you worry, it’s just a permissions issue and we can deal with it. We need to add our user to the docker group, which we can do with the following command

$ sudo usermod -a -G docker $USER

Then log out and log back in (or restart the system) so the new group membership takes effect, and try again. Guess what? It should now show the info about the Docker installation on your system.

That’s it. We have successfully installed Docker and it’s ready to use. Go ahead!

References

  1. https://docs.docker.com/install/linux/docker-ce/debian/
  2. https://docs.docker.com/get-started/

Now then, bidding you goodbye. Meet you next time.

Till then, be curious and keep learning!

by gutsytechster at March 16, 2019 03:21 PM

March 12, 2019

Prashant Sharma (gutsytechster)

Chasing JSON-LD – Part II

JavaScript Object Notation for Linked Data popularly known as JSON-LD is a lightweight syntax to inject Linked Data into JSON so that it can be widely used in web applications and can be parsed by JSON storage engines.

This post continues the previous post, which describes the basics of JSON-LD, and covers more of its features and concepts. I recommend going through that one first for easier understanding. Well then, let’s get started.

JSON-LD provides a variety of features that are really helpful for anyone working with it. Some of them are described below

  • Versioning

JSON-LD has two major versions, 1.0 and 1.1, and you can declare which one should be used to process your document, as per your use case, by defining the @version key in your @context. For eg.

{
  "@context": {
    "@version": 1.1,
    ...
  },
  ...
}

The first context that defines @version determines the processing mode for the whole JSON-LD document.

  • Default Vocabulary

Very often, many properties and types come from the same vocabulary, e.g. schema.org is a widely used vocabulary for defining the semantics of various terms. JSON-LD’s @vocab keyword lets us set a common prefix for all properties and types that do not otherwise resolve to an IRI. For eg.

{
    "@context": {
      "@vocab": "http://schema.org/"
    },
    "@id": "http://example.org/places#BrewEats",
    "@type": "Restaurant",
    "name": "Brew Eats"   
}

The words Restaurant and name don’t resolve to any IRI on their own, so they are expanded using @vocab’s IRI as a prefix. However, there may be cases where you don’t want a term to expand with @vocab’s IRI. For that, the term should be explicitly set to null. For eg.

{
    "@context": {
       "@vocab": "http://schema.org/",
       "databaseId": null
    },
    "@id": "http://example.org/places#BrewEats",
    "@type": "Restaurant",
    "name": "Brew Eats",
    "databaseId": "23987520"
}

Here, the key databaseId would not be expanded to any IRI.

  • Aliasing Keywords

JSON-LD provides a way to alias its keywords, except for @context. This lets legacy JSON be used as JSON-LD by re-using the JSON keys that already exist in the code. Note, however, that a keyword can’t be aliased to another keyword. Consider an example

{
  "@context": {
    "id": "@id",
    "type": "@type"
  },
  "id": "http://example.com/about#gutsytechster",
  "type": "http://xmlns.com/foaf/0.1/Person"
}

Here, the @id and @type keywords have been aliased to id and type respectively, and are used accordingly.

  • Internationalization

Sometimes we need to annotate a piece of text with the language it is written in. JSON-LD provides the @language keyword for this. For a document-wide default, @language can be defined in the @context. For eg.

{
  "@context": {    
     "@language": "ja"
  },
  "name": "花澄",
  "occupation": "科学者"
}

You can also override the default language using an expanded term definition, as

{
  "@context": {    
     "@language": "ja"
  },
  "name": "花澄",
  "occupation": {
    "@value": "Scientist",
    "@language": "en"
  }
}

I liked this feature the most. It’s just amazing. 🙂

  • Embedding and Referencing

JSON-LD provides a way to use a node object as a property value. What? You ask me what a node object is. Well, a node object is a piece of information that can be uniquely identified within a document and lies outside the JSON-LD context. Let’s consider an example to understand this

[{
    "@context": {
      "@vocab": "http://schema.org/",
      "knows": {"@type": "@id"}
    },
    "name": "Shiva Saxena",
    "@type": "Person",
    "knows": "http://foaf.me/gutsytechster#me"
  }, 
  {
    "@id": "http://foaf.me/gutsytechster#me",
    "@type": "Person",
    "name": "Prashant Sharma"
  }
]

Here two node objects are defined, one for Shiva Saxena and the other for Prashant Sharma, each with its own properties and separated by a comma. The node objects are linked by reference using the knows property, whose value is the identifier of the other node object, i.e. Prashant’s in this case.

Two node objects can also be linked through embedding by using the node objects as property values. It is commonly used to create the parent-child relationship between two nodes. For eg.

{
  "@context": {
    "@vocab": "http://schema.org/"
  },
  "name": "Shiva Saxena",
  "knows": {
    "@id": "http://foaf.me/gutsytechster#me",
    "@type": "Person",
    "name": "Prashant Sharma"
  }
}

Note that type coercion for the knows property is not required here, as its value is a node object rather than a string.
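To make the referencing mechanism concrete, here is a small Python sketch (just an illustration of the idea, not a real JSON-LD processor; the @id given to the first node is made up for this example): it indexes node objects by @id and follows a knows reference from one node to another.

```python
# Toy illustration of linking by reference: index nodes by "@id",
# then resolve the "knows" property to the node it points at.
nodes = [
    {
        "@id": "http://example.org/people#shiva",  # hypothetical identifier
        "name": "Shiva Saxena",
        "knows": "http://foaf.me/gutsytechster#me",
    },
    {
        "@id": "http://foaf.me/gutsytechster#me",
        "name": "Prashant Sharma",
    },
]

# Build an index from identifier to node object
index = {node["@id"]: node for node in nodes}

# Follow the reference: the string value of "knows" is an identifier
shiva = index["http://example.org/people#shiva"]
friend = index[shiva["knows"]]
print(friend["name"])  # Prashant Sharma
```

This is exactly what referencing buys you: any node anywhere in the document (or, with real IRIs, anywhere on the web) can be reached from its identifier.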

  • Expansion

Expansion, in JSON-LD terms, is the process of taking a JSON-LD document and converting it into one where no @context is required, by expanding all terms into the full IRIs, types and values defined by the @context. For eg

{
   "@context": {
      "name": "http://xmlns.com/foaf/0.1/name",
      "homepage": {
        "@id": "http://xmlns.com/foaf/0.1/homepage",
        "@type": "@id"
      }
   },
   "name": "Prashant Sharma",
   "homepage": "https://gutsytechster.wordpress.com/"
}

After expanding, it looks something like this

[
  {
    "http://xmlns.com/foaf/0.1/homepage": [
      {
        "@id": "https://gutsytechster.wordpress.com/"
      }
    ],
    "http://xmlns.com/foaf/0.1/name": [
      {
        "@value": "Prashant Sharma"
      }
    ]
  }
]

And I actually didn’t write that expanded form myself. There is a JSON-LD playground here where you can actually check if it is wrong or right!

  • Compaction

Now, can you guess what compaction might be? Well, it’s just the opposite of expansion: a process that applies a context to an already expanded JSON-LD document, shortening full IRIs back into terms or compact IRIs. Take the same expanded document from above and apply the following context to it.

{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    }
  }
}

We get our original JSON-LD document back in the same form. I’ll ask you to try it yourself in the JSON-LD playground.

But you may be wondering what the need for these expansion and compaction algorithms is. The answer is pretty simple: machines work with full IRIs, so a processor expands the JSON-LD document in order to process it, and then compacts it again so that developers get it back in the same shape they provided.
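As a rough illustration of that round trip, here is a toy Python sketch (nowhere near a spec-compliant JSON-LD processor, which handles @type, @value, nesting and much more; a real Python implementation is provided by the PyLD library): expand replaces term keys with the IRIs from the context, and compact reverses the mapping.

```python
context = {"name": "http://xmlns.com/foaf/0.1/name"}

def expand(doc, ctx):
    # Replace each term key with the full IRI defined for it in the context
    return {ctx.get(key, key): value for key, value in doc.items()}

def compact(doc, ctx):
    # Reverse mapping: full IRI keys back to their short terms
    reverse = {iri: term for term, iri in ctx.items()}
    return {reverse.get(key, key): value for key, value in doc.items()}

doc = {"name": "Prashant Sharma"}
expanded = expand(doc, context)
print(expanded)  # {'http://xmlns.com/foaf/0.1/name': 'Prashant Sharma'}

# Compacting the expanded form with the same context restores the original
assert compact(expanded, context) == doc
```

The round trip is the whole point: both forms carry the same information, one is convenient for machines and the other for developers.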

I guess we have explored quite a bit of JSON-LD, though this still doesn’t cover in-depth use cases for each of these features, and there are many more available. I leave the rest to your curiosity.

References and Further Reading

  1. https://json-ld.org/spec/latest/json-ld/
  2. https://blog.codeship.com/json-ld-building-meaningful-data-apis/
  3. JSON-LD: Compaction and Expansion
  4. JSON-LD: Core Markup

Well, then meet you in next blog post. Till then,

be curious and keep learning!

by gutsytechster at March 12, 2019 12:42 PM

March 10, 2019

Prashant Sharma (gutsytechster)

Chasing JSON-LD – Part I

Well, you might already be aware of this term, but if not, you are at the right place, my friend. I just started out with it and am already amazed by how it works. Let’s start without further ado.

JSON-LD

Starting with its expanded form, it stands for JavaScript Object Notation for Linked Data. Many of you might already be familiar with JSON, and that’s something simple: it’s the most widely used format for exchanging data across the web, representing data as key-value pairs that are both human-readable and machine-parsable. But how is that relevant at all? Because JSON is the main foundation and inspiration behind the emergence of JSON-LD.
But what’s this Linked Data all about? For a more detailed description you might want to refer to this post. Put simply, Linked Data is data that is linked across the web through semantic meaning; it allows an application to start at one piece of Linked Data and follow it to other pieces hosted on different sites across the web. And that’s where JSON-LD enters.

JSON-LD is a light-weight syntax to express Linked Data in JSON format. Its primary objective is to use Linked Data across web-based services and to store it in JSON-based storage engines.

In other words, it injects meaning into already available JSON data. But why is that needed? To understand, let’s take a simple JSON example

{
  "name": "Prashant Sharma",
  "homepage": "https://gutsytechster.wordpress.com/",
  "image": "https://gutsytechster.wordpress.com/images/gutsy.png"
}

It’s a simple example with a few self-explanatory keys and values. But a machine can’t understand it: it doesn’t know what name means. Either it has to look the meaning up in some documentation, or we have to inject it manually into the code processing this JSON. Just think how much better it would be if the meaning of each term were already present in the document itself. Well, this is possible with the help of Internationalized Resource Identifiers, or IRIs (an extended version of URIs). We can use the popular schema.org vocabulary to define these terms, so in JSON-LD it can be translated as

{
  "http://schema.org/name": "Prashant Sharma",
  "http://schema.org/url": { "@id": "https://gutsytechster.wordpress.com/" },
  "http://schema.org/image": { "@id": "https://gutsytechster.wordpress.com/images/gutsy.png" }
}

For now, don’t focus on the @id part; just see how we have defined each key in terms of IRIs. Even though this is a valid JSON-LD document that is very specific about its data, it is too verbose and would be difficult for a developer to work with. What we want is to be both specific and concise at the same time. To address this, JSON-LD introduces the notion of @context.

  • Context

When we communicate with one another, the whole conversation takes place in a shared setting, generally called the “context of the conversation”. A context allows us to use short forms without losing their actual meaning. @context in JSON-LD works the same way: it allows us to map terms to IRIs so that the short terms can be used throughout the document without losing their meaning. For eg.

{
  "@context": {
    "name": "http://schema.org/name",
    "image": {
      "@id": "http://schema.org/image", 
      "@type": "@id"
    },
    "homepage": {
      "@id": "http://schema.org/url", 
      "@type": "@id" 
    }
  },
  "name": "Prashant Sharma",
  "homepage": "https://gutsytechster.wordpress.com/",
  "image": "https://gutsytechster.wordpress.com/images/gutsy.png"
}

In the above example, we defined the IRI for each term in the @context and then used the terms directly throughout the JSON document. The way the image and homepage keys are defined will become clear in a few minutes. Just keep reading 🙂

  • Global Identifiers

Identifiers help to uniquely identify a piece of information within a document. JSON-LD uses @id for this purpose; its value is an IRI that can be dereferenced. For eg.

{
  "@context": {
    ...
    "name": "http://schema.org/name"
  },
  "@id": "http://me.markus-lanthaler.com/",
  "name": "Markus Lanthaler",
  ...
}

In Linked Data terms, we call such a piece of information a node; nodes can be represented in a linked data graph. The above example contains a node object identified by the IRI http://me.markus-lanthaler.com/. Outside of a JSON-LD context, a node object is simply a JSON object.

  • IRI

IRIs are the fundamental part of Linked Data as that is how a property or a node is identified. An IRI can be an absolute IRI, a relative IRI or a compact IRI.

  1. An absolute IRI can be dereferenced and looked up on the web.
  2. A relative IRI is used in relation to a @base value, which defines the root of the IRI.
  3. A compact IRI is a shorthand form of writing an IRI; it’s written in prefix:suffix form, where the prefix maps to the root IRI and the suffix is appended to the end. For eg.
{
  "@context": {   
     "schema": "http://schema.org/"
  },
  "@id": "http://me.markus-lanthaler.com/",
  "schema:name": "Markus Lanthaler"
}

In the above example, schema:name expands to the IRI http://schema.org/name
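That prefix:suffix expansion is mechanical enough to sketch in a few lines of Python (a toy illustration only; real JSON-LD processors also handle @vocab, keywords, and many edge cases this ignores):

```python
def expand_compact_iri(value, context):
    # Split on the first ":"; if the prefix is a term defined in the
    # context, join its IRI with the suffix, otherwise leave value as-is.
    prefix, sep, suffix = value.partition(":")
    if sep and prefix in context:
        return context[prefix] + suffix
    return value

context = {"schema": "http://schema.org/"}
print(expand_compact_iri("schema:name", context))  # http://schema.org/name
print(expand_compact_iri("name", context))         # name (no known prefix, unchanged)
```

Note that an absolute IRI like http://example.com/ also survives unchanged, because its "http" prefix is not a term in the context.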

In JSON-LD, a string is interpreted as an IRI when it is the value of an @id member, i.e.

{
  ...
  "homepage": { "@id": "http://example.com/" }
  ...
}

Here the string value http://example.com/ will be treated as an IRI as it’s a value of an @id member.

  • Type Coercion

JSON-LD supports coercing values to a particular data type. Type coercion is specified using the @type key in a term’s definition. For eg

{
  "@context": {
    "modified": {
      "@id": "http://purl.org/dc/terms/modified",
      "@type": "http://www.w3.org/2001/XMLSchema#dateTime"
    }
  },
  "@id": "http://example.com/docs/1",
  "modified": "2010-05-29T14:17:39+02:00"
}

As we can see in the above example, we defined the modified key by giving it an @id, which identifies it uniquely, and a @type, which says it is a dateTime value. The value of the modified key is type-coerced automatically because the coercion is defined in the @context. We can also set the type in the JSON body itself, as

{
  "@context": {
    "modified": {
      "@id": "http://purl.org/dc/terms/modified"
    }
  },  
  "modified": {
    "@value": "2010-05-29T14:17:39+02:00",
    "@type": "http://www.w3.org/2001/XMLSchema#dateTime"
  }  
}

We used the @value key to define the value and then set its type to dateTime. The way the modified key is defined in the above example is also known as an expanded term definition.
That’s also how we defined the IRIs in the context section earlier, where we set the @type of a key to @id. It should be clear to you now.
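A small Python aside on what a consumer gains from that coercion (assuming Python 3.7+; this is only an illustration, not part of any JSON-LD library): once a value is declared to be an xsd:dateTime, it can be parsed into a real datetime object instead of being treated as an opaque string.

```python
from datetime import datetime

value = {
    "@value": "2010-05-29T14:17:39+02:00",
    "@type": "http://www.w3.org/2001/XMLSchema#dateTime",
}

# Because @type marks this as a dateTime, a consumer can safely parse it
if value["@type"].endswith("#dateTime"):
    parsed = datetime.fromisoformat(value["@value"])
    print(parsed.year, parsed.utcoffset())  # 2010 2:00:00
```

Without the @type annotation, the string "2010-05-29T14:17:39+02:00" is just text; with it, the intended interpretation travels with the data.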

Well, I guess that should be enough for this time. But let me tell you, this is just a basic intro to what JSON-LD looks like, or rather, just the tip of the iceberg. There is a lot more to it. I’ve covered more of its features in Part II: Chasing JSON-LD – Part II. Give it a read too.

References

  1. https://json-ld.org/
  2. https://json-ld.org/spec/latest/json-ld/
  3. What is JSON-LD?
  4. JSON-LD: Core Markup

Apart from the above references, I’d ask you to read the document JSON-LD and Why I Hate the Semantic Web, written by one of the primary creators of JSON-LD. It describes what went into the creation of JSON-LD and is quite an entertaining yet informative article.

Bidding you goodbye. Meet you next time.
Till then be curious and keep learning!

by gutsytechster at March 10, 2019 07:57 PM

Shiva Saxena (shiva)

What is a makefile?

Hello everyone! Ever wanted to write a shell script to automate a task in your project? For example: after cloning the project, do task X and then manipulate file Y. I used to write shell script files for this, so that after cloning the project a user could run those scripts and get the work done. But often, a better idea than a shell script is to add a makefile. Wondering why? Keep reading.

What are makefiles and why are they used?

For me, these files are a kind of shortcut: you write multiple shell scripts in one file, separated by labels (targets), and then invoke each script by its label name.

The best example is installing software from source.
You clone it,
then execute configure,
which generates a makefile,
and then you run make to execute the makefile and complete the installation.

But not every project needs configure to generate a makefile; what if your project has nothing to do with installation at all? In that case, you can create a static makefile to automate shell tasks.

A complete explanation of makefiles is out of the scope of this post. For in-depth reading, please refer to: https://www.gnu.org/software/make/manual/make.html

Demo of a makefile

Let’s take a quick example to illustrate how it works. Say we have a project that a user can clone, after which the following tasks need to be done.

  1. Print a message saying the makefile is running.
  2. Create a new directory, say dump
  3. Create a file in it, say /dump/trash
  4. Add some text to it, say “Going to trash”

I repeat, these tasks could be done using a shell script file like to-do.sh or anything, but it’s better to use a makefile for such post-download tasks.

Let’s do it!

1. Make a test directory

$ mkdir demo
$ cd demo

2. Create a makefile

$ touch makefile

3. Add some content (note that recipe lines must be indented with a real tab character, not spaces)

create:
	$(info Makefile is running!)
	mkdir dump
	touch dump/trash
	echo "Going to trash" >> dump/trash

4. Execute with make

$ make
Makefile is running!
mkdir dump
touch dump/trash
echo "Going to trash" >> dump/trash

make executes commands written in makefile. That’s it. Simple? So let’s create some variations:

5. Add more labels

Add one more label say drop, such that contents of makefile become as follows:

drop:
	$(info Deleting directory dump/)
	rm -rf dump/

create:
	$(info Makefile is running!)
	mkdir dump
	touch dump/trash
	echo "Going to trash" >> dump/trash

6. Use the makefile with versatility

Now, users may enter 2 different commands:

$ make drop
Deleting directory dump/
rm -rf dump/

$ make create
Makefile is running!
mkdir dump
touch dump/trash
echo "Going to trash" >> dump/trash

$ make
Deleting directory dump/
rm -rf dump/

NOTE: Being at the top, drop acts as the default target, which is run when the make command is entered without a specific label.
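One refinement worth knowing (a common make convention, not something this small example strictly needs): since drop and create don’t produce files with those names, make could get confused if files called drop or create ever appeared in the directory. Declaring them .PHONY avoids that, and .DEFAULT_GOAL lets you pick the default target regardless of ordering:

```make
.PHONY: drop create
.DEFAULT_GOAL := create

drop:
	$(info Deleting directory dump/)
	rm -rf dump/

create:
	$(info Makefile is running!)
	mkdir dump
	touch dump/trash
	echo "Going to trash" >> dump/trash
```

With this version, a bare make runs create even though drop is written first.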

I am keeping this post short and will keep updating it as I learn more. Readers may explore makefiles further as per their interest 🙂
Here is the link: https://www.gnu.org/software/make/manual/make.html

Conclusion

Makefiles are fantastic! I’ll be using them more for post-download tasks rather than shell script files.

Hope you like makefiles!

Thanks for reading! 🙂

See you in the next post!

by Shiva Saxena at March 10, 2019 12:02 PM

March 09, 2019

Prashant Sharma (gutsytechster)

Semantic Web and Linked Data

Hey Wassup!
I came across something amazing known as the Semantic Web, which is associated with another awesome concept, Linked Data. We’ll try to understand both of these one by one and see what makes them awesome.

Semantic Web

What’s all the hype about the semantic web? We’ll know in a few minutes. The term ‘Semantic Web’ was coined by Sir Tim Berners-Lee, best known as the inventor of the World Wide Web. He has described the semantic web as a component of “Web 3.0”.

Let’s see what Wikipedia says about it:

The term was coined by Tim Berners-Lee for a web of data (or data web) that can be processed by machines — that is, one in which much of the meaning is machine-readable.

Let’s unpack that. In today’s web, most data is available in the form of HTML documents, linked to each other using hyperlinks. When we read a document containing a link, we can tell whether the link is worth following and how it relates to the document, but machines and software agents can’t. Machines can read these documents too, yet beyond seeking keywords in a page they have difficulty extracting any meaning from them. Hence we needed a way for machines to process the data available on the web semantically, so that they can understand the meaning behind the information and work in cooperation with people.

The semantic web approaches this by publishing documents in formats specifically designed for data, such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). RDF describes a statement as a triple, consisting of a subject, a predicate and an object. Take the sentence “Mary is the parent of Frank”: Mary is the subject, Frank is the object, and the relation between the two, parent, is the predicate. Triples can also be represented in a structure called a data graph.

Fig 1. Data Graph

Here we have linked two pieces of information through a relation, and that’s how the semantic web relates data: by saying how the pieces are related. But there is still a problem. Can you spot it?
When we talk about Mary and Frank, we know which Mary or Frank we mean, because a conversation builds a shared context that we keep referring back to. Computers have no such context, so we need to specify exactly which Mary and which Frank, and we do it using Uniform Resource Identifiers (URIs). A URI uniquely identifies anything on the web, so computers identify each subject, predicate and object by its URI. Internally, it can be viewed as:

Fig 2. Data Graph in terms of URIs

Here every relation is defined and specific. This linking of data through URIs to define semantic meaning is what we call Linked Data.
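The triple idea above is concrete enough to sketch in Python (a toy, not an RDF library; real Python tooling for this exists in rdflib, and the second triple is made up just to give the query something to do):

```python
# The statement "Mary is the parent of Frank" as a (subject, predicate, object) triple
triples = [
    ("Mary", "parent", "Frank"),
    ("Mary", "parent", "Anna"),  # hypothetical extra triple for the query below
]

def objects(triples, subject, predicate):
    # Collect all objects matching a (subject, predicate) pair
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects(triples, "Mary", "parent"))  # ['Frank', 'Anna']
```

In a real semantic-web setting, each of those plain strings would be a URI, which is exactly what the next section addresses.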

Linked Data

The different pieces of information across the web can be linked to each other by giving them semantic meaning. A data graph may link to other data graphs anywhere on the web, and this forms the foundation of the semantic web. This linking of data is referred to as Linked Data.

Fig 3. A graph comprising two data graphs

When working with Linked Data, we come across two possible questions:-

  • What’s the best way to represent the Linked Data?
  • How to link these data together?

We know the answer to the second question already, right? Yeah: using relations and URIs. For the first question there are multiple answers, or rather, there is no single best way; it all depends on the use case. There are many formats, like HTML, JSON, XML, CSV, RDFa etc. One of these formats is JSON-LD, which stands for JavaScript Object Notation for Linked Data. Since JSON is the most widely used data format on the web, we needed something that could be used just like JSON while also supporting Linked Data, and that’s where JSON-LD comes in. Its usage, though, is a talk for another time.

To summarize, we can say that

Semantic Web is the “new generation” of hyperlinking (Web 3.0, hypermedia) that contain semantic referencing. Linked Data is the data itself that is described by semantic linking. RDF is the “logical” framework for describing the data (metadata). JSON-LD is one of the possible format on which we can define Linked Data.
By Lorenzo

Big companies like Google and Facebook are already making use of Linked Data. For example, Google uses its Knowledge Graph, and Facebook uses the Open Graph Protocol via what are popularly known as OG tags.

Further Reading

  1. http://www.linkeddatatools.com/semantic-web-basics
  2. https://www.quora.com/What-is-the-Semantic-Web
  3. A short introduction to the semantic web
  4. What is Linked Data?

Well then, it’s time to say goodbye. Meet you next time.

Till then be curious and keep learning!

by gutsytechster at March 09, 2019 08:19 PM

March 08, 2019

Manank Patni

Migrating Existing Data From Sqlite to Other Databases

When we begin our learning journey with Django, we use the default database, i.e. SQLite. It is quite enough for development and learning purposes. But as the project grows and/or we want to switch to a more capable database like MySQL or PostgreSQL, we will have to transfer our existing data to the new database. First, dump the existing data:

python manage.py dumpdata -o data.json --format json

Then change the settings.py file to connect to the new database. After that, run

python manage.py migrate

This creates the tables in the new database according to the models we have defined.

python manage.py loaddata data.json

If it runs successfully, all the data will have been transferred to the new database.

by manankpatni at March 08, 2019 06:20 PM

March 06, 2019

Shiva Saxena (shiva)

How to encrypt USB drives with LUKS

Hello readers! Ever thought about the risk of losing a USB drive holding important data? You surely don’t want others to get that data without your permission, right? Encrypting your USB device is the recommended way to add a layer of security. Keep reading for a simple tutorial on encrypting USB drives with LUKS.

What is LUKS?

The Linux Unified Key Setup or LUKS is a disk-encryption specification created by Clemens Fruhwirth and originally intended for GNU/Linux. Notice the word specification; instead of trying to implement something of its own, LUKS is a standard way of doing drive encryption across tools and distributions. The reference implementation for LUKS operates on GNU/Linux and is based on an enhanced version of cryptsetup, using dm-crypt as the disk encryption backend.

Starting with the tutorial step by step (I am using Ubuntu 18.04 Bionic Beaver)

1. See available filesystems

 df -hl

2. Connect your USB

3. Find out the new connected device

df -hl  # in my case it was /dev/sdb1

4. Unmount the USB

umount /dev/sdb1

5. Wipe filesystem from the USB

Note: check the drive name/path twice before you press enter on any of the commands below. A mistake might destroy your primary drive, with no way to recover the data. So execute with caution.

sudo wipefs -a /dev/sdb1
/dev/sdb1: 8 bytes were erased at offset 0x00000036 (vfat): 46 41 54 31 36 20 20 20
/dev/sdb1: 1 byte was erased at offset 0x00000000 (vfat): eb
/dev/sdb1: 2 bytes were erased at offset 0x000001fe (vfat): 55 aa

6. Create a LUKS partition

sudo cryptsetup luksFormat /dev/sdb1 

WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase: 
Verify passphrase:

7. Open the encrypted drive

sudo cryptsetup luksOpen /dev/sdb1 reddrive
Enter passphrase for /dev/sdb1:
ls -l /dev/mapper/reddrive 
lrwxrwxrwx 1 root root 7 Jul 26 13:32 /dev/mapper/reddrive -> ../dm-0

8. Create a filesystem

I am going with EXT4, you may create any other filesystem as well.

sudo mkfs.ext4 /dev/mapper/reddrive -L reddrive
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 245500 4k blocks and 61440 inodes
Filesystem UUID: 23358260-1760-4b7b-bed5-a2705045e650
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376

Allocating group tables: done 
Writing inode tables: done 
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

9. Using the encrypted USB

9.1: If you choose to mount/unmount your encrypted USB using the CLI:

sudo mount /dev/mapper/reddrive /mnt/red
su -c "echo hello > /mnt/red/hello.txt"
  Password:
  ls -l /mnt/red
  total 20
  -rw-rw-r--. 1 root root     6 Jul 17 10:26 hello.txt
  drwx------. 2 root root 16384 Jul 17 10:21 lost+found

sudo umount /mnt/red
sudo cryptsetup luksClose reddrive

9.2: If you use the GUI instead, as I do, a dialog similar to this will appear:

[screenshot: LUKS passphrase dialog]

Just give your passphrase, save your data in it and eject safely. As simple as that!


Conclusion

LUKS is wonderful. I recommend using it not just for your most sensitive data but in general.

I hope you’ll make use of LUKS and suggest it to your friends as well.

Thanks for reading!
See you in the next post 🙂

by Shiva Saxena at March 06, 2019 10:13 AM

March 03, 2019

Shiva Saxena (shiva)

Testing with unittest.mock

Hello! Just 10 days back, I tweeted this:

While I always found it difficult, some people say writing mock test is super easy. I think it’s time for me to code more modular.

A week after making that tweet, I sat down to read the official documentation with patience. The more I read, the more I started liking the tool. In the end, I understood people were right in saying that “mock tests are easy”. Below is a quick overview of what I learned about writing mock test cases.

What is unittest.mock?

In short:

unittest.mock is a library for testing in Python. It allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.

Mock objects here refers to dummy objects whose behavior is under your control.

With example:

Let’s say you are building a web app that integrates with another, ticket-managing web app: you use its API in your code for a specific purpose, say buying tickets and getting back the ticket id. So you write code that sends a request to the ticket-managing app and gets the ticket id in response. But here is the twist!

The ticket-managing web app is kinda money-minded and doesn’t entertain your requests free of cost, so you pay a small amount every time you make a request. Okay? Now you have written the code, tested it 2-3 times and paid a small amount to the other app; that doesn’t matter much. But if you are a good developer, you have also written automated test cases for your app’s behavior, and every time you run the test suite, a couple of requests are made that cost you, again, another small amount.

During vigorous development you run the test suite countless times, and if each run charges you a small amount, it’s well understood that your wallet is in for a really big deal.

Here come mock tests, with which you can effectively deactivate the functions calling that web app’s API and assume their responses. You no longer need to send a real request to the other application, and this way you save your money 🙂

Use cases

You write mock tests:

  • while using 3rd-party APIs – as you want to test YOUR API, not theirs.
  • while your code makes requests to a remote resource over the internet – as you might want to run your test cases even in places without internet.
  • while sending requests to an async tool like celery beat – suppose the beat is set to 5 minutes, so a task runs only every 5 minutes; it’s not a good idea to keep your test suite on hold until the next beat, so you just test that the celery task gets called, not its actual run.
  • while you want to explicitly set the return value of a function – as you might want to test your feature against multiple return values of a function.
  • while you want to explicitly raise an exception when a particular function gets called – as you might want to test how your code behaves when it encounters an exception.

Example with mock.patch

There are lots of functions available in unittest.mock. For me, patch has been the most useful. That’s why I am showing just the patch function in this example, and that too very briefly; readers may explore more, as per their interest.

case 1: As a function decorator

File: 1

# file at project/app1/foo_file

def foo(arg1, arg2):
    return arg1 + arg2

File: 2

# file at project/app2/bar_file
from project.app1.foo_file import foo

def bar():
    try:
        return foo(1, 1)
    except NameError:
        return "Error"

File: 3

# file at project/app3/test_file
from unittest import mock

from project.app2.bar_file import bar

@mock.patch('project.app2.bar_file.foo')
def test_bar(mock_foo):
    # Here foo() is now mocked in bar_file, and this mocked function
    # is passed to kwarg: mock_foo for further references.

    bar()
    # Calling the function under test
    
    # testing that the mock function was called with the expected args
    # (these assert_* helpers raise AssertionError on failure)
    mock_foo.assert_called_with(1, 1)
    mock_foo.assert_called_once_with(1, 1)
    
    # manipulating the return value of mock function
    mock_foo.return_value = 5
    assert bar() == 5

    # manipulating the mock function to raise an exception when it gets called
    mock_foo.side_effect = NameError('reason')
    assert bar() == "Error"

NOTE: Where to patch?
We need to patch the function where it is getting used, not where it is defined. In the example above, foo is defined in foo_file but used in bar_file; thus we mocked the foo function in bar_file (see the argument passed to @mock.patch).

case 2: As a context manager

In the example above, we patched the function foo for a complete test function. But if we don’t want that, and instead just want to mock a function for a limited scope inside a test function, here is how to do it.

File: 3    (File: 1 and File: 2 remains same)

# file at project/app3/test_file
from unittest import mock

from project.app2.bar_file import bar

def test_bar():
  with mock.patch('project.app2.bar_file.foo') as mock_foo:
      # Here foo() is now mocked in bar_file, and this mocked function
      # can now be referenced using mock_foo.
      
      mock_foo.return_value = 5
      assert bar() == 5
      # Inside 'with' scope: mocked behavior present

  assert bar() == 2
  # Outside 'with' scope mocked behavior absent

Explore more unittest.mock

Conclusion

I see unittest.mock as a really useful tool for all the use cases listed above. I hope you don’t find mock testing difficult, but if you do, then I seriously suggest reading the official docs; they are just lovely and show the power of documentation!

Thanks for reading! See you in the next post 🙂

by Shiva Saxena at March 03, 2019 06:40 PM

February 26, 2019

Shiva Saxena (shiva)

Git stash is really cool

Ever messed up with git repositories (who hasn’t)? Git stash may turn out to be a lifesaver. It has some really cool options. Let’s check them out!

Get acquainted with stash methodology

Stash simply does a clearing job. Say you want to pull changes into your local repo, but your current changes are blocking the pull. You can use git stash to send them to the background, from where they can later be popped back out. As simple as that.

Stash options:

  • push
  • list
  • show
  • pop
  • apply
  • branch
  • clear
  • drop

Their brief definitions are available in the git-stash manual. Feel free to give a quick look at man git-stash. Following are some brief, example-based usages.

git stash push

Or just git stash, if you don’t want to use any other option with it. It simply pushes your current changes (both staged and unstaged) to the stash area.

mkdir test_stash
cd test_stash
git init
touch cool_stash
git add cool_stash
git commit -m "Add cool_stash"
echo "Hello stash" >> cool_stash

Now, these changes are not staged. Try:

git status
  modified:    cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash

It’s like you made some changes, and before making the commit you want to try a different approach. So, you just stash your current changes and try out the different approach. If you like it, keep it. If you don’t, run git reset --hard, and you can bring back the old changes with the pop option shown below.

git stash list

Let’s do one more stash entry first.

echo "hello again" >> cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash

Now if you have done a couple of stash entries, then you may have a look on the stash list with:

git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash

Here are our 2 stash entries.

stash@{0} is new and thus on the top of the stack.
stash@{1} is the old one.

But these entries don’t tell us much by themselves. I want to know what changes are stored in each stash entry. Let’s go ahead 🙂

git stash show

To see the changes (diff) stored in any stash entry, use:

git stash show -p stash@{0}
  diff --git a/cool_stash b/cool_stash
  index e69de29..13ab7f7 100644
  --- a/cool_stash
  +++ b/cool_stash
  @@ -0,0 +1 @@
  +hello again

For the recent stash, you may omit stash@{0}.

git stash pop

Okay, now I want to take out the stashed changes in stash@{1}. It’s simple.

git stash pop stash@{1}

Note that

  • popped changes are unstaged
  • popped stash is no longer present in stash area. Verify it with git stash list.

What if we want to pop out a stash entry without removing it from the stash area? Here comes the next option.

git stash apply

First, send the current changes back to stash and then try apply.

git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash
git stash apply
  modified:    cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash

Note that

  • applied changes are unstaged
  • applied stash is still present in the stash area. Verify it with git stash list.

git stash branch

In case you want to pop out a stash entry but onto a new branch, make use of this option. Example:

git status
  modified:    cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash
  stash@{2}: WIP on master: 2855c2a Add cool_stash
git stash branch side_branch stash@{0}
  Switched to a new branch 'side_branch'
  On branch side_branch
  Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
  
  modified: cool_stash

  no changes added to commit (use "git add" and/or "git commit -a")
  Dropped stash@{0} (919c486edb34e276383eb1682db0c29ac7eb9623)

Note:

  • After branching successfully, the applied stash entry is dropped.
  • The branch will still be created, but if applying the changes causes a conflict, the stash entry is not dropped.

Useful as per the manual page:

This is useful if the branch on which you ran git stash push has changed enough that git stash apply fails due to conflicts. Since the stash entry is applied on top of the commit that was HEAD at the time git stash was run, it restores the originally stashed state with no conflicts.

My understanding: if you fear applying stashed changes to the current HEAD because they may create conflicts, you can use this option to apply them on a test branch instead.

git stash clear

How many stash entries do you have now in your master? It may be any number. To remove all of them in one shot, clear the stash area with this option.

git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash
  stash@{2}: WIP on master: 2855c2a Add cool_stash
git stash clear
git stash list

Moving on!

git stash drop

It simply drops (deletes) a stash entry. For example:

echo "stash stash stash" >> cool_stash
git status
  modified: cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
echo "git git git" >> cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash
git stash drop stash@{1}
  Dropped stash@{1} (bd2b6c6d98742ca504677cf36ddb6bc93d535654)
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash

Some less popular options in usage for git stash are also available:

  • git stash save
  • git stash create
  • git stash store

Conclusion

It is fun to use git stash at times. I remember using it to bring a deleted file back. I was working on a git repo and had some current changes, some staged and some unstaged, and I accidentally deleted a useful file. I wanted to get that file back, and I knew I could do it with git reset --hard to revert all local changes. But at the same time, I didn’t want to lose my current changes. So I stashed my current changes (except the change deleting the file), did git reset --hard to get my file back, and then popped out the stashed changes. Simple? Yeah, when you know it.

I used interactive git stash in that case with

git stash push --patch

Where I could exactly choose what hunk of changes to be stashed (as I didn’t want to stash the change of deleting the file).

Hope you liked it. See you in the next post! o/

Thanks for reading! 🙂


by Shiva Saxena at February 26, 2019 04:46 PM

February 25, 2019

Shiva Saxena (shiva)

My experience in HackVSIT-2k19

Hello everyone! Recently, I went to a Hackathon (HackVSIT) held at Vivekanand Institute of Professional Studies. Following are some memorable glimpses of the event.

My friend InquiridorTechie has already written a quick post describing the event nicely as a timeline. So I decided to write just about my top 5 favourite moments 🙂 Let’s hit the countdown.

#5 – Food and snacks

In short, they were really delicious! We had lunch, evening snacks, dinner, midnight snacks, and breakfast, and we enjoyed each one of them. Believe me, that samosa sauce was exceptional!

I don’t know how to explain food times more than that. Moving ahead.

#4 – Ideation and naming convention

So, we were at the hackathon opening ceremony and the hack was about to begin, but surprisingly we had nothing in mind to work on. It’s not that we couldn’t come up with any new idea; rather, we wanted to work on software solving a real problem instead of assuming a hypothetical problem and solving it abstractly.

We were all scratching our heads to come up with a useful idea. And after around an hour of hard work, we finally came up with a new yet interesting problem-solving idea (at least we think it to be nice).

The idea being “reducing the hardwork of developers to make their custom dotfile setup by providing them a command line application that can do the work for them.”

At first thought, it appears simply useful. You just run our CLI app and your dotfile setup is ready to upload anywhere. Hurray! Isn’t that great?

Now, the second thing was to give it a “name”. I always like this part. Soon we started coming up with new names and kept rejecting each one of them. Later, keeping in mind some useful tools like ‘kiwi’, ‘celery’, ‘redis’, etc., we figured it would be a good idea to take the name of an eatable. First we considered donut, but it was already taken. After going through a couple of dishes we came up with Oliv (removed the ‘e’ from olive). We all liked it and went ahead with it.

#3 – Gotchas with git and pip

The more you work with git, the more tricks and rules you learn. So, we were working on our idea and using git/GitHub to organize things. Since we were committing and pushing to the same branch, a couple of times we messed things up.

I remember InquiridorTechie once committed to the local repo without pulling first. There we were reminded of the trick to undo a local commit, which is:

$ git reset --soft HEAD~1

But the real gotcha is this link: https://stackoverflow.com/questions/24568936/what-is-difference-between-git-reset-hard-head1-and-git-reset-soft-head — I didn’t know about the mixed and keep versions of git reset.

Do you know the full form of pip? I didn’t, but Wikipedia says:

pip is a recursive acronym that can stand for either “Pip Installs Packages” or “Pip Installs Python”.
Alternatively, pip stands for “preferred installer program”.

Also, any time I needed to know the version of a package installed via pip, what I used to do was:

  • Run python CLI
  • help(‘package’)

But GutsyTechster informed me about the show option of pip; now I just do: pip show package 🙂

#2 – Final and only evaluation

It was awesome! I had never really pitched an idea and code like this to evaluators before. We explained our idea, the problem statement, how it solves the problem, and what tech stack we used.

We showed them the working of the prototype we built during the 24-hour hackathon. And we were happy to know that the evaluators liked our idea, as they said so. Moving ahead to the final and the best part.

#1 – Evaluators are coming!

OMG! What a moment it was! InquiridorTechie and I had no experience of having an evaluator analyze our code/idea/implementation, and who knows what questions they might come up with.

It was around 7pm when teams were waiting for evening snacks, and around 8:30pm when we finally got them 😛 It didn’t matter to me; I wasn’t hungry. What mattered to me was the announcement just after the evening snacks: “Evaluators are coming within 10 minutes”. Oh really? The usual butterflies in my stomach, haha! How were they going to evaluate? My mind was racing!

Soon, we assigned each of us a task to get our idea ready at implementation level 1, so that we could at least show something to the evaluators. Though the evaluators didn’t come even all night, that 1-hour rush was simply amazing. I think we did as much work in that 1 hour as we had been doing the whole day.

Conclusion

Overall, it was a great experience! I would love to join HackVSIT next year. I would like to thank each one of them for organizing this great event.

Thanks for reading!

by Shiva Saxena at February 25, 2019 04:41 PM

February 23, 2019

Jagannathan Tiruvallur Eachambadi

February 22, 2019

Kuntal Majumder (hellozee)

Windows : A true nightmare

I never expected such a dreadful day would come that I would have to install Windows, because I didn’t have enough money. And yes, you read it right; nothing is wrong in the previous statement.

February 22, 2019 10:01 AM

February 20, 2019

Neeraj Kumar Arya (InquiridorTechie)

HackVSIT 2k19


Hello, Friends

It has been a long time since I posted any blog. I was learning new stuff and busy with my college curriculum. Finally, I got time to write something new. So here I will share my first experience at a hackathon.

I am sure most of you know what a hackathon is, but I’ll briefly explain it anyway.

The word “hackathon” is a portmanteau of the words “hack” and “marathon”, where “hack” is used in the sense of exploratory programming, not its alternate meaning as a reference to computer security.

A hackathon (also known as a hack day, hackfest or codefest) is a design sprint-like event in which computer programmers and others involved in software development, including graphic designers, interface designers, project managers, and others, often including subject-matter-experts, collaborate intensively on software projects. The goal of a hackathon is to create usable software or hardware with the goal of creating a functioning product by the end of the event.

I was very keen to participate in the hackathon but due to lack of knowledge and confidence, I always took my step back. I had heard from others that in the hackathon you have to develop a software or something to solve a particular problem. I didn’t have any knowledge of development. Now from this year I decided to participate in the hackathon and explore my skills. HackVSIT gave me this opportunity. We four friends decided to register ourselves for the event. And luckily we got a confirmation email 2 days before the hackathon’s date.

The day before the hackathon.

We were all excited to participate. I left my relative’s wedding for this hackathon. But unfortunately, one of our friends had to leave the city for some reason. He talked with the hackathon organizers and they allowed us to participate with 3 members. The rest of us were thinking about which problem we should work on. We had 8 tracks for this hackathon:

  1. Human Resources
  2. Blockchain
  3. Mental Health
  4. Fintech
  5. Tools for Developers and Designers
  6. Smart city
  7. IoT
  8. Computer Vision

We discussed and decided that Fintech, Tools for Developers, and Human Resources were good to work on. The rest of the tracks were also good, but we were not good at them. As I told you, this was my first hackathon; Prashant and I talked for about an hour that night. He told me what things I should bring apart from those mentioned in the FAQs. We discussed real-life problems and planned our next day’s work. And then we went to take a long nap, because the next day we had to stay awake for 24 hours.

Hack Day

I woke up at 6:30 in the morning, packed my things, and left home at 7:40. I reached Haiderpur metro station at 9:00 am, waited for my two friends, and then all 3 of us rushed to the college (venue). After registration, we got the room no. where we would hack the whole night. But first, we enjoyed the opening ceremony of the hackathon in the auditorium. VIPS introduced their chief guest, mentors, evaluators, sponsors, etc. Meanwhile, Shiva (one of our team members) got a cracking idea to work on and we all agreed to it. The idea was simple but unique. As our project is open source, you can check our idea here on GitHub.

The organizers at VIPS were very helpful and calm. The arrangements made by them were nice. They provided us a delicious lunch. I was enjoying every moment and sharing our pics on Twitter. By dinner, we had completed 40% of the work. Then dinner, then work: that is what a hackathon is all about, where you get your hands dirty with code and don’t care about anything else. The same happened with me; I forgot to sleep and eat. We hacked the whole night, wrote shell scripts in Python, and shared our midnight snacks pic on Twitter.


End of the Hackathon

The evaluators came in the morning; they liked our project and congratulated us. However, we were not selected in the top 12 teams, but our experience was fantastic. In the end, we got a participation certificate and stickers. One more thing I experienced: after this hackathon I got a sweet sleep.

I would like to thank the VIPS team for organizing such a great event. I feel honored to have been selected for this hackathon. In this hackathon, I learned that we should push our limits and think out of the box; only then can we achieve something. I will keep taking part in the upcoming hackathons of this year and explore my skills and knowledge.

References

1. https://en.wikipedia.org/wiki/Hackathon

I hope you liked this blog!

See you soon!

Have a good day!

InquiridorTechie.

by inquiridortechie at February 20, 2019 05:28 PM

February 18, 2019

Bhavin Gandhi

Entering The Church of Emacs

From the title you may think this is another post or debate on <your favorite editor here> vs GNU Emacs. Instead, I will be talking about how I started using Emacs and why it is the best tool I have ever used. How it started: during dgplug summer-training 2018, we had a series of sessions on GNU Emacs. Thanks to mbuf, the whole series helped me a lot to get started with Emacs, and now I use it for all my editing.

by @_bhavin192 (Bhavin Gandhi) at February 18, 2019 04:31 PM

February 17, 2019

Priyanka Sharma

Journey with nature: The Guava Leaf


Seeing and observing are two different things. We are all familiar with guavas and we must have eaten them too. But have you ever tried to observe what benefits they may give to you? I have had two guava trees in my garden for the past 10 years. I have seen them grow from saplings to trees but had never really observed them; then I came across many great benefits that guava leaves can provide.

Guava is well known throughout the world as a tropical fruit rich in nutrients. People love to eat it as it has a sweet and juicy flavor. Not only consumed as food, guava is also used for medicinal purposes. The fruit, leaf, and other parts of guava have been suggested to give benefits to human health. Scientific studies have documented the healthful qualities of the superfruit’s leaves, and you can see what they’ve found for a variety of conditions below:

1. Diarrhea

  • In medicinal use, guava leaf is mostly used to treat diarrhea. Diarrhea is a condition where the colon can’t absorb water due to bacterial infection by Staphylococcus aureus. A study reported that guava leaf has strong anti-bacterial compounds such as tannins and essential oil which are very effective in fighting S. aureus infection and inhibiting the bacteria’s growth.
  • The way to use guava leaves to cure diarrhea is by taking 6 guava leaves and washing them. Then, boil them thoroughly and squeeze the leaves. Next, you get the leaf extract. Then, just drink it straight once in two days until you feel much better.
  • People suffering from diarrhea who drink guava leaf tea may experience less abdominal pain, fewer and less watery stools, and a quicker recovery, according to Drugs.com. Add the leaves and root of guava to a cup of boiling water, strain the water and consume it on an empty stomach for quick relief.

2. Lowers Cholesterol

  • It is surprising that guava leaf can reduce the level of cholesterol in blood, which can cause many health problems. Studies reported that guava leaf contains active phytochemical compounds such as gallic acid, catechin and epicatechin which can inhibit pancreatic cholesterol esterase, slightly reducing cholesterol levels.
  • LDL or low-density lipoproteins are one of the five major groups of lipoproteins which transport all fat molecules throughout your body. It is the excess of this class of cholesterol that may cause a host of health disorders, particularly of the heart. According to an article published in Nutrition and Metabolism, study participants who drank guava leaf tea had lower cholesterol levels after eight weeks.


3. Manages Diabetes

  • Japan has approved guava leaves tea as one of the foods for specified health uses to help with the prevention and treatment of diabetes. The compounds in the tea help regulate blood sugar levels after meals, by inhibiting the absorption of two types of sugars – sucrose and maltose. According to an article published in Nutrition and Metabolism, guava leaf tea inhibits several different enzymes that convert carbohydrate in the digestive tract into glucose, potentially slowing its uptake into your blood.
  • Catechin in guava leaf can not only burn fat but also control the blood glucose level; in other words, it has a hypoglycemic effect on the body. This may help prevent the development of diabetes, especially type 2, which also develops along with obesity.


4. Promotes Weight Loss

  • Looking to shed the extra inches around your belly? Sip guava leaf tea. Guava leaves help prevent complex carbs from turning into sugars, promoting rapid weight loss. Drink guava leaf tea or juice regularly to reap the benefits.


5. Fights Cancer

  • Due to its high quantities of the antioxidant lycopene, various studies have suggested that guava plays a significant role in lowering the risk of cancer.
  • Many studies have been conducted to find the components and benefits of guava leaf. One of the best benefits that you may find in guava leaf is anti-cancer activity. It has been suggested that guava leaf may reduce the risk of several types of cancer such as gastric, breast, oral and prostate cancer. This benefit is performed by the antioxidants contained in guava leaf such as quercetin, lycopene and vitamin C. Those components can induce apoptosis, or the self-killing activity of cancer cells, according to a study published in 2011.


How to Make Guava Leaves Tea

To get all those benefits, you can start consuming guava leaves as tea. Below are several steps to make guava tea:

  1. Dry some young guava leaves
  2. After they got dry, crush them into powder
  3. Use one tablespoon of guava leaves and add it to one cup of hot water
  4. Let it brew for 5 minutes then you can strain it
  5. Drink guava leaves tea regularly, once a day

Those are all the benefits you may get from guava leaves. You can consider it a natural remedy which has many good effects on your body and, of course, a low-cost medicine which you can get almost anywhere.


by priyanka8121 at February 17, 2019 04:48 PM

Jagannathan Tiruvallur Eachambadi

(Neo)vim Macro to Create Numbered Lists

I usually encounter this when saving notes about lists of items that are not numbered but are generally better off being itemized. Since this is such a common scenario, I did find a couple of posts1 2 that explained the method, but they had edge cases which were not handled properly.

Say you want to note down a shopping list and then decide to number it later,

Soy milk
Carrots
Tomatoes
Pasta

Start off by numbering the first line and then move the cursor to the second line. Then, the steps are

  1. Start recording the macro into a register, say a, by using qa.
  2. Press k to go one line up.
  3. yW to copy one big word, in this case “1. ”.
  4. Then j to come one line down and | to go to the start of the line.
  5. Use [p to paste before and | to go the beginning.
  6. To increment, Ctrl+A and then j and | to set it up for subsequent runs.

To run the macro, go to the next line and execute @a. For repeating it 3 times, you can use 3@a.

by Jagannathan Tiruvallur Eachambadi (jagannathante@gmail.com) at February 17, 2019 12:05 PM

February 16, 2019

Priyanka Sharma

Networking: Heart of World !

Ask ten different people what networking is and you may get as many as ten different answers. A person’s definition of networking probably depends upon their use of this important personal and professional activity. Whether you network to make new friends, find a new job, develop your current career, explore new career options, obtain referrals or sales leads, or simply to broaden your professional horizons, it is important to focus on networking as an exchange of information, contacts, or experience.

Networking is one of the most fascinating things ever. Here I am, writing from my place, and you are reading from your place. This is cool. But have you ever wondered what the world’s scenario would be without this heart? It is beyond my imagination. Being a Computer Science student, I spend most of my time sitting in front of the laptop doing coding, programming, web development and much more. So, slowly I have developed an interest in computer networking.

How does a computer network really work? How has the network developed to this vast extent? What would happen to the world without networking? These are some of the questions that fascinate me so much that I can’t stop myself from writing this.

Computer networking is the practice of interfacing two or more computing devices with each other for the purpose of sharing data. Computer networks are built with a combination of hardware and software.

Clients and Servers

An important relationship on networks is that of the server and the client. A server is a computer that holds content and services such as a website, a media file, or a chat application. A good example of a server is the computer that holds the website for Google’s search page: http://www.google.com. The server holds that page, and sends it out when requested.

A client is a different computer, such as your laptop or cell phone, that requests to view, download, or use the content. The client can connect over a network to exchange information. For instance, when you request Google’s search page with your web browser, your computer is the client.

MAC address

Imagine MAC addresses like people’s addresses or phone numbers. You can’t have two persons with the same MAC address. The thing about a MAC address is that it’s only used in LANs. It’s an address that is only usable inside a local network. You can’t send data to a device in a different network using its MAC as the destination, but you can send data to devices in your local network using the MAC address as an identifier.

When a device is manufactured, its chip is assigned an address called the MAC address. A media access control address of a device is a unique identifier assigned to a network interface controller for communications at the data link layer of a network segment.

Traditional MAC addresses are 12-digit (6-byte or 48-bit) hexadecimal numbers. By convention, they are usually written in one of the following three formats:

  • MM:MM:MM:SS:SS:SS
  • MM-MM-MM-SS-SS-SS
  • MMM.MMM.SSS.SSS
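The three formats above can be sketched with a few lines of Python (the example address 00:1A:2B:3C:4D:5E is made up for illustration):

```python
# One 48-bit MAC address, as 12 hex digits = 6 bytes.
digits = "001A2B3C4D5E"

# Groups of 2 hex digits, separated by colons or dashes.
colon  = ":".join(digits[i:i + 2] for i in range(0, 12, 2))
dash   = "-".join(digits[i:i + 2] for i in range(0, 12, 2))
# Groups of 3 hex digits, separated by dots.
dotted = ".".join(digits[i:i + 3] for i in range(0, 12, 3))

print(colon)   # 00:1A:2B:3C:4D:5E
print(dash)    # 00-1A-2B-3C-4D-5E
print(dotted)  # 001.A2B.3C4.D5E
```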

IP Address

For a computer to communicate with another computer it needs an IP address, and it must be unique. If there is another computer on the same network with the same IP there will be an IP address conflict and both computers will lose network capability until this is resolved.

The IP address consists of 4 numbers separated by decimals. The IP address itself is separated into a network address and a host address. This means that one part of the IP address identifies the computer network ID and the other part identifies the host ID.
As an example, an IP address of 192.168.0.45 is known as a class C address (more on classes later). A class C network uses the first 3 numbers to identify the network and the last number to identify the host. So, the network id would be 192.168.0 and the host id would be 45. Computers can only communicate with other computers on the same network id. In other words, networking will work between 2 computers with IPs 192.168.0.231 and 192.168.0.45 respectively, but neither can communicate with 192.168.1.231 because it is part of the 192.168.1 network.

                              IP address = Network ID part + Host ID part

An IP address has two components, the network address and the host address. A subnet mask separates the IP address into the network and host addresses (<network><host>). Subnetting further divides the host part of an IP address into a subnet and host address (<network><subnet><host>) if additional subnetwork is needed.
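As a rough sketch, Python’s stdlib ipaddress module can split the example address from the text into its network and host parts (the /24 prefix is the default class C mask described above):

```python
import ipaddress

# The article's example: 192.168.0.45 with a class C (/24) mask.
iface = ipaddress.ip_interface("192.168.0.45/24")
print(iface.network)         # 192.168.0.0/24  -> network part
print(int(iface.ip) & 0xFF)  # 45              -> host part

# Two hosts can talk directly only if they share a network id:
a = ipaddress.ip_interface("192.168.0.231/24")
b = ipaddress.ip_interface("192.168.1.231/24")
print(iface.network == a.network)  # True  (same 192.168.0 network)
print(iface.network == b.network)  # False (different network)
```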

Sub-Classes of IP addressing:


The 32 bit IP address is divided into five sub-classes. These are:

  • Class A
  • Class B
  • Class C
  • Class D
  • Class E

Each of these classes has a valid range of IP addresses. Classes D and E are reserved for multicast and experimental purposes respectively. The order of bits in the first octet determines the class of the IP address. The class of an IP address determines the bits used for network ID and host ID, and the total number of networks and hosts possible in that particular class. Each ISP or network administrator assigns an IP address to each device connected to its network. An IPv4 address is divided into two parts:

  • Network ID
  • Host ID

Class A:

IP address belonging to class A are assigned to the networks that contain a large number of hosts.

  • The network ID is 8 bits long.
  • The host ID is 24 bits long.

The higher order bit of the first octet in class A is always set to 0. The remaining 7 bits in first octet are used to determine network ID. The 24 bits of host ID are used to determine the host in any network. The default sub-net mask for class A is 255.x.x.x. Therefore, class A has a total of:

  • 2^7 = 128 network IDs
  • 2^24 – 2 = 16,777,214 host IDs per network


Class B:

IP addresses belonging to class B are assigned to medium-sized to large-sized networks.

  • The network ID is 16 bits long.
  • The host ID is 16 bits long.

The higher order bits of the first octet of IP addresses of class B are always set to 10. The remaining 14 bits are used to determine the network ID. The 16 bits of host ID are used to determine the host in any network. The default subnet mask for class B is 255.255.0.0. Class B has a total of:

  • 2^14 = 16,384 network addresses
  • 2^16 – 2 = 65,534 host addresses per network


Class C:

IP addresses belonging to class C are assigned to small-sized networks.

  • The network ID is 24 bits long.
  • The host ID is 8 bits long.

The higher order bits of the first octet of IP addresses of class C are always set to 110. The remaining 21 bits are used to determine the network ID. The 8 bits of host ID are used to determine the host in any network. The default subnet mask for class C is 255.255.255.0. Class C has a total of:

  • 2^21 = 2,097,152 network addresses
  • 2^8 – 2 = 254 host addresses per network


Class D:

IP addresses belonging to class D are reserved for multicasting. The higher order bits of the first octet of IP addresses belonging to class D are always set to 1110. The remaining bits are for the address that interested hosts recognize.

Class D does not possess any subnet mask. IP addresses belonging to class D range from 224.0.0.0 to 239.255.255.255.


Class E:

IP addresses belonging to class E are reserved for experimental and research purposes. IP addresses of class E range from 240.0.0.0 to 255.255.255.254. This class doesn’t have any subnet mask. The higher order bits of the first octet of class E are always set to 1111.

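The leading-bit rules above can be checked with a short Python sketch. The ip_class helper below is just an illustration for this post, not a standard library function:

```python
def ip_class(ip):
    """Classify an IPv4 address by the leading bits of its first octet."""
    first = int(ip.split(".")[0])
    if first < 128:      # leading bit 0    -> 0.x.x.x   to 127.x.x.x
        return "A"
    elif first < 192:    # leading bits 10  -> 128.x.x.x to 191.x.x.x
        return "B"
    elif first < 224:    # leading bits 110 -> 192.x.x.x to 223.x.x.x
        return "C"
    elif first < 240:    # leading bits 1110 (multicast)
        return "D"
    else:                # leading bits 1111 (experimental)
        return "E"

print(ip_class("192.168.0.45"))  # C
print(ip_class("224.0.0.5"))     # D
```

The first-octet thresholds (128, 192, 224, 240) are exactly where the leading bit patterns 0, 10, 110 and 1110 change.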

Subnet:

Dividing a network into many smaller networks makes each network easier to maintain, and lets us isolate one particular network from the others for security. Each of these smaller networks is called a subnet.


Subnet Mask:

A subnet mask is a mask used to determine what subnet an IP address belongs to. An IP address has two components, the network address and the host address.

It is called a subnet mask because it is used to identify the network address of an IP address by performing a bitwise AND operation between the IP address and the netmask. A subnet mask is a 32-bit number that masks an IP address, and divides the IP address into the network address and host address.

A subnet mask is made by setting the network bits to all “1”s and the host bits to all “0”s. Within a given network, two host addresses are reserved for special purposes and cannot be assigned to hosts: the all-zeros host address is the network address itself, and the all-ones host address is the broadcast address.
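Since a mask is just n network bits of “1” followed by host bits of “0”, it can be built mechanically. This is a small sketch (make_mask is an illustrative helper, not a library function):

```python
def make_mask(network_bits):
    """Build a dotted-decimal subnet mask from the number of network bits."""
    # Shift ones left so the top `network_bits` bits stay 1 and the rest become 0.
    mask = (0xFFFFFFFF << (32 - network_bits)) & 0xFFFFFFFF
    # Split the 32-bit value into four octets.
    return ".".join(str((mask >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(make_mask(24))  # 255.255.255.0  (default class C mask)
print(make_mask(26))  # 255.255.255.192
```

Note how the default masks of classes A, B and C simply correspond to 8, 16 and 24 network bits.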

Advantage of Subnet Mask:

Given an IP address, if it is bitwise ANDed with the subnet mask, we get the network ID of the network to which this particular IP address belongs.

IP address: 200.1.2.130

This means that a packet is to be sent to host 200.1.2.130 and we have to find out which network this particular host belongs to.

  • Convert the IP address 200.1.2.130 to binary:

11001000.00000001.00000010.10000010

  • Let the subnet mask be 255.255.255.192:

11111111.11111111.11111111.11000000

  • Performing bitwise AND-

11001000.00000001.00000010.10000010

11111111.11111111.11111111.11000000  

We will get: 11001000.00000001.00000010.10000000

i.e. 200.1.2.128

Hence, 200.1.2.130 belongs to the network 200.1.2.128
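The AND operation worked through above can be reproduced in a few lines of Python. The network_id helper here is an illustrative sketch, not a standard function (Python’s built-in ipaddress module can also do this):

```python
def network_id(ip, mask):
    """Bitwise-AND each octet of an IPv4 address with the subnet mask."""
    octets = zip(ip.split("."), mask.split("."))
    return ".".join(str(int(a) & int(b)) for a, b in octets)

print(network_id("200.1.2.130", "255.255.255.192"))  # 200.1.2.128
```

Running it confirms the hand calculation: 130 AND 192 is 10000010 AND 11000000 = 10000000, i.e. 128.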

 

                                                                                        

 

by priyanka8121 at February 16, 2019 04:24 PM

February 14, 2019

Mohit Bansal (philomath)

Moving!

It's been a long time since I published anything here, but that doesn't mean I stopped writing. I kept writing every day; I just haven't published anything since October 2018. This post is the announcement that I am moving my blog from here to somewhere else. I know I should have published my writings, but the reason I didn't is that Blogger doesn't support Markdown, and for the same reason, I will be

by Abstract Learner (noreply@blogger.com) at February 14, 2019 09:25 AM

February 13, 2019

Prashant Sharma (gutsytechster)

Get started with Django REST Framework

Hey there everybody!
I was learning the concept of APIs to get started with Django REST Framework (popularly known as DRF). As soon as I understood its basics, I headed towards DRF. It was really fun learning it and I bet it will be for you too.

Let’s get started

I am directly going to jump on coding and then we are going to understand the things on the way. We will be creating an event reminding app API throughout this tutorial.

So, let’s start with creating a virtual environment. Virtual environments are very helpful when you are working on different projects that need different versions of the same dependencies, and they ease out a lot of work for sure. So, we are going to use them. There are a few options to create virtual environments in Python, though I am going to use pipenv.

If you don’t have it installed then go ahead and install it using pip.

$ pip install pipenv

Now create a directory anywhere in your system. I’d prefer it to be in home.

~$ mkdir RemindEvent && cd RemindEvent

Once we are inside the directory, we create the virtual environment as

~$ pipenv install django djangorestframework

The above command will create a virtual environment along with installing the Python packages `django` and `djangorestframework`. Once it’s done, we can activate our virtual environment as

~$ pipenv shell

Now, you would be seeing the terminal prompt starting with (RemindEvent).

We’ll now start the project using django command as

(RemindEvent)~$ django-admin startproject RemindEvent

Once the project is created, we will create an app using Django. First go into the RemindEvent directory in your main folder and then run

(RemindEvent)~$ python3 manage.py startapp Event

Once you are done with it, your directory structure should look like this:

.
├── Pipfile
├── Pipfile.lock
└── RemindEvent
    ├── Event
    │   ├── admin.py
    │   ├── apps.py
    │   ├── __init__.py
    │   ├── migrations
    │   │   └── __init__.py
    │   ├── models.py
    │   ├── tests.py
    │   └── views.py
    ├── manage.py
    └── RemindEvent
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py

If it looks like this, then great work! Now our project is set up perfectly and we are ready to get our hands dirty with code.

Since we have created the app, we need to register it in settings.py. Along with our app, we also need to register the rest_framework app.

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'Event',
]
  • Models

We will be creating models to store our data in the database, hence we need to define their schema. So, go ahead and write the following in models.py.

from django.db import models


class Event(models.Model):
    """This class represents Event model"""

    name = models.CharField(max_length=255, blank=False)
    creation_date = models.DateTimeField(auto_now_add=True)
    modified_date = models.DateTimeField(auto_now=True)
    alert_date = models.DateTimeField()
    alert_interval = models.DurationField()

    def __str__(self):
        return f"{self.name}"

Once we are done creating models, we need to perform migrations:

(RemindEvent)~$ python3 manage.py makemigrations
(RemindEvent)~$ python3 manage.py migrate

It will create the corresponding database tables into your django project.

  • Admin

Now that we are done with creating models, we go ahead and register them in admin.py so that they appear in Django’s default admin panel.

from django.contrib import admin
from .models import Event

admin.site.register(Event)
  • Serializer

Now this is where DRF actually participates. Serializers help to convert complex data like model instances into Python native data types, which can then be rendered into formats like JSON or XML that act as request-response data formats. Just as ModelForm defines a set of rules to directly convert model fields into form fields, rest_framework’s serializers module provides ModelSerializer.
Now that we know what serializers are, let’s create one. Create a new file in your Event directory as serializers.py and write the following into it

from rest_framework import serializers
from .models import Event


class EventSerializer(serializers.ModelSerializer):
    """This class serializes the Event model instance into formats like JSON"""

    class Meta:
        model = Event
        fields = ('id', 'name', 'creation_date',
                  'modified_date', 'alert_date', 'alert_interval',)
        read_only_fields = ('creation_date', 'modified_date',)

Our EventSerializer class inherits from the ModelSerializer class provided by rest_framework.serializers. ModelSerializer itself maps each model field to its corresponding serializer field. We define the creation_date and modified_date fields as read only, i.e. they can’t be edited manually.

  • Views

We define class-based views while creating APIs. Though one can use function-based views as well, class-based views have their own advantages: better code reusability, cleaner and shorter code, and looser coupling. Especially since DRF provides built-in Django functionality in the form of classes, we can inherit them and override their features as per our requirements. DRF provides generic built-in views to ease out our work.

Well, that’s enough talking. Let’s write some code in views.py file.

from rest_framework import generics

from .serializers import EventSerializer
from .models import Event


class CreateView(generics.ListCreateAPIView):
    """This view performs GET and POST http request to our api"""
    queryset = Event.objects.all()
    serializer_class = EventSerializer


class DetailsView(generics.RetrieveUpdateDestroyAPIView):
    """This view performs GET, PUT and DELETE http requests to our api"""
    queryset = Event.objects.all()
    serializer_class = EventSerializer

Now let’s understand what all this does. Firstly, we import the generics module from the rest_framework app, which contains the view classes ListCreateAPIView and RetrieveUpdateDestroyAPIView. These view classes provide the functionality for basic CRUD operations, and in our Event app we need to create, retrieve, update and delete events. The CreateView class handles listing all the available events as well as creating a new event, while the DetailsView class handles retrieving, updating and deleting an event.
In each class, we override the built-in attributes queryset, which is used for returning the objects, and serializer_class, which is used for validating and deserializing input, and for serializing output. There are a few other attributes and functions that can be overridden according to the requirements. You can find out about them here.

  • URLs

As soon as we are done with creating views, the only thing left is to set up urls. Firstly create the urls.py file in our Event app and write following into it.

from django.urls import path
from rest_framework.urlpatterns import format_suffix_patterns
from .views import CreateView, DetailsView

urlpatterns = [
    path('events/', CreateView.as_view(), name='create'),
    path('events/<int:pk>/', DetailsView.as_view(), name='details')
]

urlpatterns = format_suffix_patterns(urlpatterns)

Here we have used the as_view() method with class-based views so as to return a callable view that takes a request and returns a response; we can’t use class-based views the same way as normal function views. Another thing to mention is that we have used format_suffix_patterns. It allows us to specify the data format when we use the URLs, by appending the format to be used to the URL of every pattern.
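For example, with the suffix patterns in place, the same endpoint can be requested with an explicit format suffix (the URLs below assume the default development server and DRF’s standard suffix handling):

```
http://127.0.0.1:8000/api/v1/events/       # default: the browsable API
http://127.0.0.1:8000/api/v1/events.json   # the same data rendered as JSON
```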

Now next thing to do is to link these URLs to the project level urls.py. In RemindEvent/urls.py write

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/v1/', include('Event.urls')),
]

Here we have versioned our api, hence used v1 in urlpattern.

Now is the time to fire up the browser. But before that, let’s create a superuser to log in to the admin panel. Just go to the terminal and inside of RemindEvent type

(RemindEvent)~$ python3 manage.py createsuperuser

Once done creating super user, run the server using

(RemindEvent)~$ python3 manage.py runserver

Now go to http://127.0.0.1:8000/admin/ and create some events. After that, head on to http://127.0.0.1:8000/api/v1/events/

Browsable API offered by DRF

One of the main advantages of DRF is that it provides a browsable interface for testing the API. As you can see, once we go to the endpoint /events/ it lists out all the available events and also provides an interface for creating a new event using a POST HTTP request.

Now go to http://127.0.0.1:8000/api/v1/events/2/


At the endpoint /events/2/ we can see all the information regarding the event with id 2. It also provides an interface for updating and deleting the event using PUT and DELETE HTTP requests respectively.

Isn’t it amazing! I feel it’s awesome.

And here we reach the conclusion of this blog post. It was just an introduction and there is more to know about it. We still haven’t performed any authorization or authentication as to who can access our API, or generated tokens to track its users. As I said, it’s just a small introduction. I’ll come up with these topics as soon as I learn them.

References

  1. https://www.django-rest-framework.org/
  2. https://medium.com/backticks-tildes/lets-build-an-api-with-django-rest-framework-32fcf40231e5
  3. https://scotch.io/tutorials/build-a-rest-api-with-django-a-test-driven-approach-part-1

This is it for now. Bidding you goodbye! Meet you next time.

Till then be curious and keep learning.

by gutsytechster at February 13, 2019 07:06 PM

Shivam Singhal (championshuttler)

2018 Year in Review with AMO

Sibelius Monument, Finland

Each one of us has some goals to complete, things to learn, and places to visit. With the year ending, it is time to look back and see what all we did for the last 365 days.

Well, 2018 has been a phenomenal year for me. Working with the Addons aka AMO team is where the major part of 2018 was spent. I have learned how to work remotely with a cross-cultural team. I have met some super awesome people like Caitlin, Rebecca and many more. I fixed ~50 bugs in AMO. I got to meet a lot of great people, built connections and learned things. I am really happy to see a few of my goals getting completed. I failed at some things miserably too.

Here is everything I did in the last year.

January

  • Got the idea for create-web-ext — a scaffolding tool for browser extensions.
  • Talked to my mentor Trishul about it.

February

  • Pitched the idea of create-web-ext to Mozilla Addons team and asked to submit it as GSoC Project.
  • Declined as GSoC Project. Decided to go ahead to develop it.
  • Made team with my Mentor Trishul and Tushar to start working on the project.
  • First International Flight to Finland for Methane Hack. Won 1500 Euros.

March

  • Spent many sleepless nights with Trishul, Tushar to work on create-web-ext.
  • Made the prototype of create-web-ext. Trishul pitched it in Addons Show & Tell Meeting. Got good feedback about it 🌟.

April

  • My first code contribution to AMO, a small patch for amo-frontend.
  • Was working on another patch, sadly never completed it, huh.
  • Applied for the Featured Addons Advisory Board. REJECTED 😎

May

  • Fixed 8 bugs in addons-server and amo-frontend
  • Was working on twitter card implementation for addons, sadly never completed it. Felt demotivated so many times due to this bug.

June

  • Sent 9 patches to addons-server and amo-frontend. Learned about the css property: word-wrap: break-word;
  • Went to Finland again to OuluHack Hackathon. Won 1000 Euros 💵

July

  • Sent 3 patches in amo-frontend.
  • Made the dropdown on AMO better. Learned about test assertions.

August

  • Fixed 6 bugs in addons-server and amo-frontend.
  • Deployed Static themes on production on AMO Frontend.
  • Learned that RTL means Right to Left and LTR Left to Right.
  • Wrote code in SQL for the first time ever for AMO Server.
  • Gave talk about browser extensions in DevConf
  • Met dgplug members Farhaan, Sayan and many others in DevConf’18.

September

  • Fixed 10 bugs in addons-server and amo-frontend.
  • First patch to Webextensions API.
  • Went through many sleepless nights to set up Gecko on my laptop for the patch. Took more than 15 days. 🤓
  • Decided to dual boot with Fedora OS for Gecko.
  • Sat next to the WiFi router for ~8 hours to set up Gecko.

October

  • Sent 5 patches to addons-server and amo-frontend.
  • Added developer policies in footer of AMO.
  • PyCon India, my 2nd time, which I attended as a volunteer.
  • Met dgplug members again in PyCon.
  • Applied for Mozilla Addons Reviewer. Rejected. Lesson learned — need to work on my JS Skills.

November & December

  • College Exams, practical and lot of college useless stuff.
  • Managed to solve 5 bugs in the mean time only.
  • Joined Featured Addons Advisory Board for next 6 months.

My plans for 2019

  • Helping beginners. For 2019, I am looking to help a handful of new code contributors to the AMO project, because I feel that while contributing code you get to learn a lot of things, like how to communicate; code is just one part of it.
  • More patches. I am looking to submit patches to Addon Manager and Webextensions API in Firefox.
  • Eat, sleep, code, gym, repeat. Being a software developer you are most likely to keep sitting on your chair for the major part of your day. This year I want to take out more time for physical activities.

by championshuttler (Shivam Singhal ) at February 13, 2019 05:27 PM

February 12, 2019

Kuntal Majumder (hellozee)

How Design Works?

Note: The title may be misleading 😛. I started dabbling with graphic design back when I was in 7th grade. At that time I saw someone working in Photoshop, probably extracting a person and putting that extracted piece on top of another picture. What a wow moment that was, and I was like, I also want to do that. So I got a copy of Adobe Photoshop 7 (technically a pirated copy, but Photoshop 7 was not being sold anymore back then and I was just experimenting with it, so ethically I was like, okay, it doesn’t hurt anyone, let that be so).

February 12, 2019 10:11 PM

February 08, 2019

Bhavin Gandhi

Where is bhavin192?

It’s been nearly a year since I posted anything on my blog. So, where was I? I was planning to migrate this blog from the WordPress setup to a static site generator. I started doing that, later got busy with other stuff, and it kept getting delayed. But I really wanted to write a new blog post once I got the site migrated. Finally, I have migrated the blog completely to Hugo.

by @_bhavin192 (Bhavin Gandhi) at February 08, 2019 06:19 PM

February 06, 2019

Jagannathan Tiruvallur Eachambadi

Ansible 101 by trishnag

We had an introductory session on Ansible in #dgplug and these are some notes from the class.

  1. Learned about the hosts file used to create an inventory: https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#hosts-and-groups

  2. Different connections (ssh and local for now). I had also tested it against a server running CentOS.

  3. We then went on to create an ansible.cfg file in the demo directory, which takes precedence over the global configuration.

  4. Learned to write a basic playbook, which is a YAML file.

    • /bin/echo using shell module

    • ping using the ping module

by Jagannathan Tiruvallur Eachambadi (jagannathante@gmail.com) at February 06, 2019 07:15 PM

February 05, 2019

Shiva Saxena (shiva)

A quick tutorial on Ansible

Hello all! Today, we had an Ansible session in #dgplug by trishnaguha. Before the session, I just had an idea that Ansible is used for some sort of YAML-based deployment or something, but I had never really tried it before. It was a nice experience using Ansible. Let me give you a quick wrap-up of the session.

What is Ansible?

Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.

On a simple note you can automate tasks with ansible 🙂

Read about this tool from Ansible Documentation.

The Tutorial Begins

Prerequisite:

  • GNU/Linux
  • Ansible >= 2.6.0
  • SSH key-pair
  • openssh-server

Step by Step:

1. Run sshd (if it is not running):

$ sudo systemctl start sshd

2. Copy your ssh key to localhost

$ ssh-copy-id <username>@127.0.0.1

3. Run first ansible command

$ ansible all -i "localhost," -c local -m ping

It returned SUCCESS pong

4. Say hello to localhost 🙂

$ ansible all -i "localhost," -c local -m shell -a '/bin/echo hello'

It returned hello

5. Create a directory and go inside it:

$ mkdir demo; cd demo

6. Create a file here:

$ touch hosts

7. Put some content in the file:

 $ echo "localhost ansible_connection=local" >> hosts

This file hosts is known as the inventory.

The way we added localhost in our custom inventory, we call it ungrouped hosts.

See default hosts file of ansible in your system at /etc/ansible/hosts

8. Run ansible using our custom inventory.

$ ansible all -i hosts -m shell -a '/bin/echo hello'

It returned:  hello

9. Edit the inventory now (to make localhost a grouped host).

Put [webserver] (group label) above localhost ansible_connection=local and the content of the hosts file becomes:

[webserver]
localhost ansible_connection=local

10. Run ansible again using group name.

$ ansible webserver -i hosts -m shell -a '/bin/echo hello'

Till now, all the ansible commands we have used are called ad-hoc commands, which is something that you might type in to do something really quick, but don’t want to save for later.

11. Now have a look at playbook

As Trishna informed us about “playbook” in her own words:

Till now we were passing all operations need to be executed via command line argument. We would not want to run these modules/task as argument every time we want to configure something as it will neither be feasible if we want to execute multiple operations at a time and we want the operations to be saved.

This is where the term “playbook” comes into play. Playbook is a YAML file that contains one or more plays where each play contains target host and performs a series of tasks on the host or group of hosts, specified in the play.

And a bit about modules:

Modules are the programs that perform the actual work of the tasks of a play. The modules referenced in the playbook are copied to the managed hosts. Then they are executed, in order, with the arguments specified in the playbook

The -m argument in the above ansible commands specifies the module to use.

12. Create a playbook file (.yml)

$ touch demo.yml

13. Put content in demo.yml (care about indentation)

- hosts: webserver
  connection: local

  tasks:
  - shell: /bin/echo hello

Explanation of the content:

– webserver is the name of the group.
– connection specifies the plugin we want to use to communicate with the host.
– The keyword tasks contains the operations that are to be performed on the destination host.
– Each operation <module (shell) with its arguments/options> is called a task. We can add multiple tasks under this section.

14. Run playbook

$ ansible-playbook demo.yml -i hosts -v

ansible was the command we were using for ad-hoc commands, whereas ansible-playbook is the command for running playbook.

15. Edit playbook file

Now we tried 2 tasks in playbook, content of demo.yml becomes

- hosts: webserver
  connection: local

  tasks:
  - shell: /bin/echo hello

  - ping:

16. Run playbook again

$ ansible-playbook demo.yml -i hosts -v

17. Create a custom ansible.cfg

Certain settings in Ansible are adjustable via a configuration file: ansible.cfg

Default configuration can be found here: /etc/ansible/ansible.cfg

Let’s create our own custom ansible.cfg

$ touch ansible.cfg

18. Add following contents to ./ansible.cfg

[defaults]
inventory=hosts

Explanation of the content:

– [defaults] is the tag in the ansible.cfg file where we can pass certain configuration for our playbooks.
– Here inventory=hosts means we are telling Ansible to use the inventory file “hosts”.

19. Run playbook again (Note: we do not have -i hosts anymore)

$ ansible-playbook demo.yml -v

Ansible will always look for ansible.cfg first in the current directory then in the default directory.

20. Read more about ansible 🙂

Useful references

Conclusion

With this, we have reached the end of this post. Overall, I found Ansible to be a great tool in which there is definitely so much to learn. I would like to thank Trishna Guha for giving an amazing session!

Thank you!

See you in the next post 🙂

 

 

by Shiva Saxena at February 05, 2019 04:36 PM

Failed to connect to lvmetad: booting issue

I have gone through this: “A connection to the bus can’t be made”
And this: “ERROR No UMS support in radeon module!”
Now dealing with this: “Failed to connect to lvmetad”
This trilogy has become a funny and unexpected blog series on “A habit of Learning”. Let’s find out: do I get rid of these booting issues once and for all, or is another error keeping an eye on me? (Noooooo!)

Machine Specs:

https://www.sony.com.sg/electronics/support/laptop-pc-vpc-series/vpcea45fg/specifications

Problem:

* Type: Booting time issue
* Effect: Slow booting
* Error Message:

failed to connect to lvmetad

* Brief Explanation: After upgrading from Ubuntu 16.04 to 18.04 and then updating my graphics drivers, I am getting this error while booting, which makes booting slow.

Cause:

Kernel bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=799295

What is lvmetad?

From man page of lvmetad:

lvmetad is a metadata caching daemon for LVM. The daemon receives notifications from udev rules (which must be installed for LVM to work correctly when lvmetad is in use). Through these notifications, lvmetad has an up-to-date and consistent image of the volume groups available in the system. By default, lvmetad, even if running, is not used by LVM. See lvm.conf(5).

Solution:

This is the key reference I used to resolve this issue and speed up my booting time:

Answer of Shahriar Shovon solved the issue for me:
https://support.linuxhint.com/question/lvm-issue-ubuntu-18-04-failed-to-connect-to-lvmetad/

What I did:

Edit /etc/lvm/lvm.conf file with the following command:

$ sudo nano /etc/lvm/lvm.conf

Now, find the line use_lvmetad=1 and change it to use_lvmetad=0

 

Now, run the following command to update the initramfs file for the new kernel:

$ sudo update-initramfs -k YOUR_KERNEL_VERSION -u

$ sudo sync

The command to update the initramfs may differ for different distros. And to get my kernel version, I just pressed tab during the command $ sudo update-initramfs -k <tab> and the available kernel versions appeared. I selected the latest one.

Reboot and you are good to go!

Conclusion:

I just disabled that service (the best I could do) to get rid of this issue.

After this, I was able to boot without the lvmetad error. But wait…
Not again! The booting is still slow (~2 minutes) and it gets stuck for around 15-20 seconds at a line that says:

Scanning for btrfs file systems

I found this solution: https://unix.stackexchange.com/questions/78535/how-to-get-rid-of-the-scanning-for-btrfs-file-systems-at-start-up, but never used it. I am okay as long as I am not getting any “error” or “failed” type words while booting. So that’s it. All okay now! 🙂

What do you think? Should I remove btrfs-tools from the system? Let me know in the comments section below.

I hope this post will help someone.

Thanks for reading!

by Shiva Saxena at February 05, 2019 10:45 AM

[drm:radeon_init[radeon]] ERROR No UMS support in radeon module!: booting issue [solved]

This post is in continuation of my previous blog post about “A connection to the bus can’t be made: booting issue”. As soon as I got rid of my prior booting error I got another one, but this time it was more specific (related to the radeon device) and it was easy to find a solution on the web. Here is what worked for me.

Machine Specs:

https://www.sony.com.sg/electronics/support/laptop-pc-vpc-series/vpcea45fg/specifications

Problem:

* Type: Booting time issue
* Effect: Slow booting
* Error Message:

[drm:radeon_init[radeon]] ERROR No UMS support in radeon module! [Solved]

* Brief Explanation: After updating my <GNU/Linux>, I am getting this error while booting, which makes booting slow.

Cause:

After going through this, I found out that the reason lies with the graphics drivers. I didn’t follow the solution listed there but did something more straightforward.

Solution:

Install/update/upgrade graphic drivers. That’s it! I followed the instruction of Joshua Besneatte in this answer https://askubuntu.com/questions/1066105/how-to-install-amd-graphic-drivers-on-ubuntu-18-04 which are as follows:

sudo apt update
sudo apt upgrade
sudo apt autoremove
sudo apt autoclean

Now, add the AMD updates PPA and update:

sudo add-apt-repository ppa:oibaf/graphics-drivers
sudo apt-get update
sudo apt upgrade

Then reconfigure your packages to be safe:

sudo apt install --reinstall xserver-xorg-video-amdgpu
sudo dpkg --configure -a
sudo dpkg-reconfigure gdm3 ubuntu-session xserver-xorg-video-amdgpu

Now simply reboot. It worked for me. 🙂

Conclusion:

I believe that getting updated graphics drivers is the permanent solution to the error discussed in my previous post, because somewhere I knew from the start that the problem was associated with the GPU drivers: I was able to boot perfectly with Linux Mint in compatibility mode (that is, without GPU acceleration).

I went through 2 booting issues one after another. I still don’t know why these errors were attacking me back to back, one by one; just the message kept changing, because I was still not getting a clean and fast boot. Quite hilariously, after resolving this issue, I was again dealing with another one, and this time the message was:

failed to connect to lvmetad

Solution in the next post.

I hope this post will help someone and may they not get into the next error in queue like me. But if you know any other solution regarding the same issue, please do write in the comments section below, that would be helpful for someone else.

Thanks for reading! 🙂

by Shiva Saxena at February 05, 2019 10:32 AM

A connection to the bus can’t be made: booting issue [solved]

Hello and welcome back to “A Habit of Learning”! I couldn’t write for so long due to repetitive health issues, then exams, then an exciting chess tournament, then a bit of LaTeX and cookiecutter-like tools, and then here I am. Before I write about anything else of what all I have been going through these days, I feel there is a need of THIS blog post, as I couldn’t find many solutions over the web regarding the issue.

Recently, I went through this issue while installing Ubuntu 18.04 in my relative’s laptop.

Machine Specs:

https://www.sony.com.sg/electronics/support/laptop-pc-vpc-series/vpcea45fg/specifications

Problem:

* Type: Booting time issue
* Effect: Unable to boot completely
* Error Message:

(gvfsd-metadata:743): GUdev-CRITICAL **: 00:18:28:319: g_udev_device_has_property: assertion 'G_UDEV_IS_DEVICE (device)' failed
A connection to the bus can't be made

* Complete error: Image shown below [while trying Linux Mint (the same issue was there in Ubuntu 16.04 and 18.04)].

connection_to_bus

* Brief Explanation: Tried to boot my <GNU/Linux distro> and the initial <seconds> appear to be normal booting. Then comes a black screen and nothing happens thereafter, except a message shown repeatedly: “A connection to the bus can’t be made”. I waited for around <minutes> but the system was unable to boot completely.

Cause:

I can’t say what the actual reason might be. But as far as I searched the web, this can arise due to a dedicated GPU in your machine (at least that was THE CAUSE in my case).

Scenarios:

There are 2 possible scenarios I have experienced which are as follows:

1. While booting Ubuntu 18.04 or Mint Cinnamon from a Live USB/CD.
2. While booting after installation.

One of my friends also had the same issue while shutting down his Ubuntu 18.04. :p

Solutions:

Here are a couple of solutions I tried to get rid of this issue; I can’t say which may work for you 🙂

One of my friends solved this issue by upgrading his OS (from Ubuntu 16.04 to 18.04).

So I tried that first: I installed Ubuntu 16.04 and then upgraded it to 18.04. But that didn’t work; instead it became the cause of the error discussed in another post of mine. All solutions listed below, and in the successive posts of this series, were done on Ubuntu 18.04.

1. Arrow Keys [Before/After Installation]

It appears silly, but it worked for me 🙂
After 5-10 seconds of normal booting, I hit the arrow keys up, down, left, right (repeated 2-3 times) and pressed the Enter key 2-3 times (more than a solution, it was an act of frustration, which is exactly what it was). And voilà! My laptop booted completely after 30-70 seconds. But the problem persisted on the next boot. So it is not a permanent solution, but surprisingly it may work as a temporary workaround.

This is obviously not the best solution. Let’s look at some other alternatives that worked for some people.

2. Setting Nomodeset [Before/After Installation]

The reason nomodeset helps is explained well on the Ubuntu forums:

The newest kernels have moved the video mode setting into the kernel. So all the programming of the hardware specific clock rates and registers on the video card happen in the kernel rather than in the X driver when the X server starts. This makes it possible to have high resolution nice looking splash (boot) screens and flicker free transitions from boot splash to login screen. Unfortunately, on some cards this doesn’t work properly and you end up with a black screen. Adding the nomodeset parameter instructs the kernel to not load video drivers and use BIOS modes instead until X is loaded.

source: https://ubuntuforums.org/showthread.php?t=1613132

This solution also worked for me. To set up the nomodeset option, there are 2 cases:

1. While trying OS with Live USB/CD

In this case, while in the boot menu, press F6 and choose “nomodeset”, and you should be able to boot properly. I did this and installed the OS, hoping that the issue would go away after a complete installation (but it didn’t).

[Screenshot: Ubuntu boot options menu with the nomodeset option]

2. While booting after installation.

As written in this link https://askubuntu.com/questions/38780/how-do-i-set-nomodeset-after-ive-already-installed-ubuntu/:

  • While booting, press Shift (to go to the GRUB menu)
  • While in the boot menu, press ‘e’
  • Find the line starting with `linux`
  • Replace “quiet splash” with “nomodeset”, or add “nomodeset” before “quiet splash”
  • Press CTRL + X to boot

Once the boot completes, you need to set “nomodeset” permanently in your GRUB configuration, following Coldfish’s instructions in this answer https://askubuntu.com/questions/38780/how-do-i-set-nomodeset-after-ive-already-installed-ubuntu/ which are:

sudo vim /etc/default/grub

and then add nomodeset to GRUB_CMDLINE_LINUX_DEFAULT:

GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
GRUB_CMDLINE_LINUX=""

Then save and exit (:x in vim), and simply run:

sudo update-grub

Now, reboot and you are good to go. This also worked for me. But still, 🙂 nomodeset is a temporary fix that merely works around the cause; it doesn’t solve the root problem itself.

3. Update Distro [After Installation]

Simply run:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade
# And just to avoid any doubt
sudo apt full-upgrade

I applied all 3 solutions listed above. And finally, I stopped getting the error “A connection to the bus can’t be made” on successive boots.

Conclusion:

I knew that “nomodeset” is just a temporary solution. So I tried to boot again with only the default options, i.e. “quiet splash”. And yes, I didn’t get the previous error line! (Perhaps the problem got solved during the update.)

But for me, it didn’t come with a clear win because now, I was dealing with the second issue in line. And this time the error was:

[drm:radeon_init[radeon]] ERROR No UMS support in radeon module!

After an hour of research I was able to solve this error as well. And I think the solution to this new error is the permanent solution to the previous one. Why do I think so, and what is the solution to this new error? I will soon write my next post unveiling the same.

I hope this post will help someone. And if you know any other solution to the same issue, please do write in the comments section below; that might also help someone else.

Thanks for reading!

 

by Shiva Saxena at February 05, 2019 10:12 AM

February 03, 2019

Kuntal Majumder (hellozee)

Enough of Youtube

“Youtube”-ing seems like a trendy hobby for most people my age; vlogs seem to be taking over blogs, but I have neither the setup required to record videos nor the time to invest in making them.

February 03, 2019 03:16 PM

Piyush Aggarwal (brute4s99)

Multi-booting

INTRODUCTION

I am an Arch Linux user by day, but recently I needed constant access to Windows 10 OS to develop KDE Connect - an awesome project by some smart-working developers from across the globe, for Windows.

While working with the team, I also had to install Ubuntu to test a new release for the Ubuntu users.

All this boils down to a system that already contains Arch Linux needing to house Windows 10 and Ubuntu alongside it, on a 500 GB hard disk.

I have also mentioned a rookie mistake in this blog post, so do take it with a pinch of salt.

CHALLENGE 1: One storage device, many partitions

The thing is, many partitions are required for such a system, and the legacy MBR partitioning scheme allows just 4 primary partitions at max. Enter GPT (used with UEFI booting), which allows many more partitions on a single storage device.

STATUS: Arch Linux `OK` ; Ubuntu `TO_BE_INSTALLED` ; Windows `TO_BE_INSTALLED`

CHALLENGE 2: Getting Windows 10 media to boot in UEFI mode

For this, I used Rufus to create my installation media, and supplied the latest Windows ISO received from the Media Creation Tool provided by Microsoft.

Luckily, Windows installed itself nicely along with Arch Linux, and I was able to dual boot just fine after the installation, with GRUB2 from Arch Linux.

STATUS: Arch Linux `OK` ; Ubuntu `TO_BE_INSTALLED` ; Windows `OK`

TRIPLE BOOT TIME!

I went for Ubuntu 18.04 LTS here because it was the latest edition with LTS.

I simply installed it on a separate ext4 partition at the end of my HDD (using the “Something else” option).

I’m not sure what happened here, but it might have done something to my prior GRUB config managed by Arch Linux.

On the next boot, the GRUB menu of Arch Linux showed up, which had options for Windows and Arch Linux (but no Ubuntu).

CHALLENGE 3: Get Ubuntu OS to boot

Then I went ahead and booted into Arch to run grub-mkconfig -o /boot/grub/grub.cfg, because it didn’t know about Ubuntu. After I rebooted the system, Ubuntu’s GRUB config greeted me, which did not have Arch Linux as a boot option.

I lost access to Arch Linux now. I was not happy, to say the least.

STATUS: Arch Linux `NOT_BOOTING` ; Ubuntu `OK` ; Windows `OK`

CHALLENGE 4: Get Arch Linux to boot

Next, I tried running the same grub-mkconfig -o /boot/grub/grub.cfg in Ubuntu OS.

I got options for Arch Linux then, but they didn’t work for me (poor Arch support in Ubuntu 18.04?).

Then I fired off an Arch Linux Live USB and decided to try to get GRUB reinstalled from my Arch Linux installation.

  • re-formatted the /dev/sda1 (EFI) partition.
  • arch-chrooted into my Arch installation and force-reinstalled all my Arch Linux packages as described in my previous post (to get Linux firmware images in /boot).

I could’ve done it by reinstalling just the firmware too, as <Namarggon> on #archlinux (IRC) suggested.

  • ran grub-install and grub-mkconfig commands from my GitHub gist - ARCH COMMANDS
  • ran genfstab command from that GitHub gist.

(kudos to <GreyShade> and <iovec> for helping me out on this one!)

I have access to my Arch Linux and Ubuntu now.

STATUS: Arch Linux `OK` ; Ubuntu `OK` ; Windows `NOT_BOOTING`

UPDATE:

It took a couple of commands, bootrec /fixmbr and bootrec /rebuildbcd, from a Windows installation medium. They installed the new EFI files in the EFI partition, and I finally had access to all three systems! \o/

STATUS: Arch Linux `OK` ; Ubuntu `OK` ; Windows `OK`

CONCLUSION

I obviously should not have removed the EFI partition, since that step increased the work needed to set up other OSes. If you happen to find any other weak links or better procedure, please do share it with me over the mail or twitter!

Stay safe and make the internet a healthier place!

February 03, 2019 11:31 AM

February 02, 2019

Kuntal Majumder (hellozee)

A Couple of Words

Speaking from a normal person’s point of view, do praises hurt? or if I can phrase it better, what hurts more, praise or criticism? An appropriate answer would be none, if you can take praise with your head held high you must be able to take criticism with the same attitude, right?

February 02, 2019 05:41 PM

January 29, 2019

Piyush Aggarwal (brute4s99)

Contributing to pandas

pandas
pandas: powerful Python data analysis toolkit

for PyDelhi DevSprint 02/02/19

pre-DevSprint reading material:-

Homework

0. Remove existing pandas installation

```
pip uninstall pandas
```

1. Fork me!

2. Clone the fork to your PC.

3. Install pandas from source.

  • cd into the clone and install the build dependencies.

    python -m pip install -r requirements-dev.txt
  • Build and install pandas. (takes ~20 minutes on an i5 6200U with 8GB RAM)

    python setup.py build_ext --inplace -j 4 
    python -m pip install -e .

Background

Work on pandas started at AQR (a quantitative hedge fund) in 2008 and has been under active development since then.

Chat with more pandas at Gitter.im!

Some Tips

Bad Trips

I accidentally rebased on origin/master. That was ~350 commits behind upstream/master!

Steps taken:-

  • reverted HEAD to just before rebase
  • merged upstream/master into origin/is_scalar
  • updated origin/master to get NO diffs in upstream/master and origin/master
  • ran git rebase origin/master and fixed a conflict in doc/source/whatsnew/v0.24.0.rst
  • pushed to origin/is_scalar.

Stay safe and make the internet a healthier place!

January 29, 2019 07:31 PM

January 27, 2019

Pradhvan Bisht (pradhvan)

Making things count

By the end of last year I graduated, and, as I like to call it, my life’s free trial ended 😛. Starting this week (21 Jan 2019), I had my first day of work. Simple things have been complex in the past few months, but I guess I survived, with a lot of help, thanks to the good people around me.

So it all started in the last semester, by the start of Feb 2018. I had done a couple of bad interviews, and even when the interviews went well I wasn’t confident that I would fit in. Maybe it was imposter syndrome or something else, I don’t know. After graduation, I ended up choosing unemployment and giving myself some more time to just code random stuff. 😛 The main reason behind writing this blog is to give you the ups and downs, so you can get a reality check of what it’s actually like, because I have been talking to some of my college juniors who are now in the same phase where I was one year back.

Just a bit of background on me to get things clear from the start: I started coding in Python seriously a bit late, around the start of my third year in college; by seriously I mean coding daily or looking up small patches in upstream open source projects. I had been active in the local meetup groups PyDelhi and ILUG-D, and had recently been introduced to #dgplug, so I wasn’t a complete noob in the world of open source tech in general. To put it nicely, I was LAZY. I am not proud of it, but yeah, <pip install regret>.

So the journey started after college at the start of August; I had some family problems in July, so getting used to being back home took some time. I had read a lot of blogs about people taking a break and learning to code, but most of them were about someone who had not coded in their life and who, in the coming six months, taught themselves to code and got a job, so I could not relate to them. For the first two weeks I used to check out the syllabi of coding boot camps and roadmaps for becoming a backend engineer. Here is the first mistake that someone should really avoid while travelling the same road, because as a wise friend of mine once said:

Life is too small to make all the mistakes yourself sometimes it’s best to learn from others mistakes.

So coming back to the point,

Talk to people even if you don’t want to: I was kind of lucky to have got a college in Delhi, because Delhi has some awesome tech communities, but when college ended I went back home to Nainital and missed the meetup culture. I would still talk to people on #dgplug; thank god it’s an online community, otherwise I would have been completely lost. Even though I used to talk to people, I did not actually ask for help in figuring things out, maybe because of what these people would think, or maybe because people would ask me not to do it and to take whatever job I was offered ASAP. This changed when I read 6 Bags and a Carton by our very own @fhackdroid. 😛 It was at that moment I thought I was lucky and this wasn’t a bad idea. Later, when I was staying with him, Sayan, Chandan, Devesh and Rayan during PyCon India 2018, I opened up about what I should do to make the most of the time, and he, along with Sayan, helped me a lot to make things clear: what I should be focusing on, and what projects I could do. They also suggested reading the blogs of people who took a break and went to the Recurse Center to work on their tech skills.

So the point I am trying to make is that you should talk to some people in the community even before you start planning; that eventually helps a lot in making a concrete roadmap for the next x months, because you will fail a lot, get diverted, and, trust me, sometimes even question whether you took the right decision or not. During those times it’s best to have some experienced people helping you out, and I think if you have a concrete roadmap, these feelings shrink to a size where you can just ignore them and work.

Time is money: I initially planned for an entire year! God knows what I was thinking. This could be because of the first problem I mentioned, of not talking to people. But yes, unless and until you have a job waiting for you, or you want to focus on a subpart of a particular topic, trim down the time, because it takes time to get stable after your gap. In my honest opinion, three months are more than enough, unless and until you are just switching to learning to code and have not written a single line of code or know nothing about it.

Blogging to success: At first, I used to think, what’s the point of blogging when all the awesome blogs out on the internet are far better than mine? But later I realized it’s not about your blog being the best; it’s mainly about consistency, because I think blogging helps in two ways:

1. It helps you structure your thoughts that you can explain to ‘n’ number of people easily.

2. For all the research you do behind the blog you get to learn a lot and that learning sticks for a long time plus you get a backup of your notes to look back.

I would recommend that someone who is taking time off to code/learn/hack/build silly stuff write a blog at least once every three days.

Document the shit out of it: I was heavily inspired by OBM, so I started documenting my daily working habits. I set daily goals and future week goals, but the problem I faced was that I over-engineered the shit out of it, which at some point became tedious. I maintained a bullet journal, with WakaTime installed in Sublime Text to track my coding time, and also did short sprints of 25 min each every time I sat down to code or read. Things went downhill pretty quickly because it required a lot of effort just to maintain the whole workflow. I am not saying it’s bad, but it wasn’t for me. I went through a lot of iterations of the bullet journal and found the simplest one to be easy to maintain and easy to follow.

So I would say, don’t get sad and totally give up the idea if you’re not able to follow the whole idea of documentation; just keep evolving the process until it suits your needs, because this will definitely help you realize how much effort you have put in so far and how much you still have to put in. This, trust me, helps in times when you feel like you’re not doing enough work or that you made a wrong decision.

One last thing: all the very best. If you’re taking that road, just remember to work hard and things will eventually pan out. If they did for someone like me, who had no clue, 😛 you at least have a heads up on things. 🙂

Finally, things wouldn’t have been the same without the help from #dgplug, and I definitely owe a lot to them. The people in the community are always ready to help you in the correct manner, not spoon-feeding you but making you independent. I got a lot of inspiration from different people in the community to work hard; I hope I can follow in the same footsteps 🙂

 

by Pradhvan Bisht at January 27, 2019 06:02 AM

January 26, 2019

Prashant Sharma (gutsytechster)

What are APIs?

Howdy fellows! What’s up?
So, I wanted to start with the REST API framework offered by Django. But before I moved any step forward, I realized that I didn’t know what an API is. And that’s where I went into the world of APIs. Yes, there is another world of APIs, where they do everything in their own way: they talk, they walk their own way. They don’t speak languages like we do; they are more technical in that case. They speak in terms of request and response. But just wait, before we go any further, let’s discuss everything little by little.

Let’s start with its full name. The term API stands for Application Programming Interface. Now, I am gonna use a quite common analogy for it, maybe the best one, which is why it is so often used to explain APIs. When we use our mobile phone or smartphone, we use the interface provided by the hardware in our hand, and we can make it do anything. Can’t we? Of course we can. Through that interface we can talk, or simply interact, with our mobile phone. In similar terms, when one piece of software wants to interact with another, they do it using APIs. APIs are the interface for them; hence the name.

When we talk about APIs, we often talk about two API paradigms.

  • SOAP
  • REST

We’ll try to understand a little about both of them

SOAP

SOAP stands for Simple Object Access Protocol. As I already mentioned, applications interact with each other using an API as an interface, in terms of request and response. You send a request to an API to fetch some data, and it gives that data back to you in a response. That’s the very foundation of how we use an API.

[Image: A SOAP request and response example]

Credits: CodeProject

SOAP uses XML notation to format the request and response. It provides higher security as compared to REST. It needn’t necessarily be used over HTTP; e.g. it can be used over SMTP as well. It mainly uses two HTTP verbs, GET and POST: GET for retrieving data and POST for adding or modifying data.
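To make this concrete, here is a minimal sketch in Python of what building a SOAP request body looks like. The operation name GetUserDetails and its UserId parameter are made up for illustration; a real service would define its own operations in a WSDL.

```python
# Build a bare-bones SOAP envelope using only the standard library.
# (GetUserDetails and UserId are hypothetical, for illustration only.)
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_envelope(operation, params):
    """Wrap an operation and its parameters in a SOAP Envelope/Body."""
    ET.register_namespace("soap", SOAP_NS)
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        child = ET.SubElement(op, name)
        child.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

xml_body = build_soap_envelope("GetUserDetails", {"UserId": 42})
print(xml_body)
# The envelope would then be sent as an HTTP POST with
# Content-Type: text/xml, e.g. via urllib.request or requests.
```

Notice that all the structure lives in the XML body itself; that verbosity is part of why SOAP payloads are heavier than their REST/JSON counterparts.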

REST

REST stands for REpresentational State Transfer. Here the request and response are usually formatted in JSON, though it can process any of XML, HTML or JSON. Since JSON is quite easily understandable, it is preferred. REST is built over HTTP, i.e. it can perform all the CRUD operations using different HTTP verbs.

 HTTP verb      CRUD Operation

  1. POST       Create
  2. GET        Read
  3. PUT        Update
  4. PATCH      Update
  5. DELETE     Delete

Other than this, REST is made for the web, as it uses URIs (Uniform Resource Identifiers) and HTTP. Consuming a REST API is as simple as making an HTTP request.

[Image: A REST request and response example]

Credits: Cisco Learning Labs

In REST, we send a request to an endpoint and get a response in return. An endpoint is one end of a communication channel: each endpoint is a location from which the API can access the resources it needs to carry out an operation. Each response contains a status code representing the status of the request. A valid request gives a 200 OK status, and an invalid request may return a 404 NOT FOUND error. You can find a whole list of these status codes here.
While working with APIs, you often come across the term ‘payload’. Payload in programming means the relevant information or data. In APIs, when we talk about the payload, we refer to the data we receive apart from other metadata like content-type headers. As you may notice in the image above, the response contains the payload as well as other data, referred to as response headers. These headers are the metadata which tell us about the nature of the request and response.
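As a tiny illustration of status codes and payloads, the Python snippet below parses a JSON payload the way a REST client would. The sample payload and status codes are invented, not from a real API.

```python
# Minimal sketch: branch on the HTTP status code, then parse the
# JSON payload into a Python dict. Sample data is made up.
import json

def handle_response(status_code, raw_payload):
    """Return the parsed payload on 200 OK, or None on an error status."""
    if status_code == 200:          # 200 OK -> request succeeded
        return json.loads(raw_payload)
    return None                     # e.g. 404 NOT FOUND -> no payload

sample = '{"id": 7, "name": "gutsytechster", "active": true}'
print(handle_response(200, sample))   # the parsed payload as a dict
print(handle_response(404, sample))   # None
```

In a real client the status code and body would come from the HTTP response object (e.g. `response.status_code` and `response.text` in the requests library), but the parsing logic is the same.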

Most APIs are free to use, like the Google Maps API, but only to an extent, i.e. they put some restrictions on their use. Many APIs provide an authentication process to keep track of API usage: the service provider issues an authentication key, aka an API key. These keys provide a way to identify the source of each request. APIs can also be offered as a business product.
Though the question of which API to use really depends on one’s use case. To know more about differences between these two, you might want to have a look at this.

References and Further Reading

  1. https://learninglabs.cisco.com/lab/what-are-rest-apis/step/1
  2. http://www.soapuser.com/basics1.html
  3. https://www.upwork.com/hiring/development/intro-to-apis-what-is-an-api/

That’s all for now. It was a great experience learning about APIs and to know such a great thing. I hope it would be the same for you. Thanks for stopping by this post. If you find any mistake or any suggestion regarding this, feel free to comment in the section below. Meet you next time.

Till then, Be curious and keep learning!

by gutsytechster at January 26, 2019 07:14 PM

Piyush Aggarwal (brute4s99)

Breaking Free

PRIVACY
Free as in freedom

INTRODUCTION

When I started using this static blog, little did I know of all the trackers that came with the supporting resources used in a cool starter like this one. In this post, I explain the various kinds of trackers, with a personal teaspoon of which trackers I dealt with while sterilising this blog!

How does browser tracking work?

When you visit a website, third-party trackers (cookies, pixel tags, etc) get stored on your computer.

How many trackers exist in any given website depends on how many the website owner has decided to include. Some websites will have well over 60 trackers, belonging to a multitude of companies, while others might have only one - perhaps to track visitor numbers, or see where these visitors are coming from, or to enable a certain functionality. Some might have none at all.

Not all trackers are necessarily tied to companies tracking your browsing habits - but when you accept cookies, you’re saying ok to all the trackers that are there - including those feeding info back to companies.

What is being collected and Why?

Trackers collect information about which websites you’re visiting, as well as information about your devices.

One tracker might be there to give the website owner insight into her website traffic, but the rest belong to companies whose primary goal is to build up a profile of who you are: how old you are, where you live, what you read, and what you’re interested in. This information can then be packaged and sold to others: advertisers, other companies, or governments.

They are also joined by more well-known companies. Some of these are even visible: Google’s red G+ button, for example, is a tracker; Facebook’s “like” thumb is a tracker; and Twitter’s little blue bird is also a tracker.

Why does it affect me?

Data companies and advertisers also know which articles you read and which ones you skip, which videos you watch, and which ones you stop after 5 seconds; which promotional emails you read, and which ones you send to your Trash folder without opening; what you like on Facebook, what you retweet, what you heart on Instagram.

When you put all these things together, you end up with your own unique online fingerprint — which immediately identifies you, with all your likes and dislikes and personal traits

And that’s potentially very bad news, because once they know exactly who you are and what makes you tick, companies and advertisers can:

  • spam you with finely-tuned, targeted ad campaigns that follow you around the web.
  • potentially jack up their prices for you.
  • invade your privacy and chip away at your anonymity online, which nobody likes.

[Image: Web trackers, an illustration from a post by Princiya]
She writes on awesome topics at freeCodeCamp; you should check out her posts!

Tracking mechanisms

Cookies

Cookies are the most widely known method to identify a user. They use small pieces of data (each limited to 4 KB) placed in a browser storage by the web server. When a user visits a website for the first time, a cookie file with a unique user identifier (could be randomly generated) is stored on the user’s computer.

Subsequent visits to the Facebook page do not require you to login, because your details will be remembered by the browser through a cookie stored during your first login.
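To make this concrete, here is a small sketch, using only Python’s standard library, of how a server might mint such a unique-identifier cookie. The cookie name visitor_id is made up for illustration.

```python
# Sketch of a server-side tracking cookie: a random unique identifier
# that the browser will send back on every later visit.
import uuid
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["visitor_id"] = uuid.uuid4().hex            # random unique user ID
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
cookie["visitor_id"]["httponly"] = True            # hide from page scripts

# This is the Set-Cookie header the server would emit on the first visit.
print(cookie.output())
```

The 4 KB limit mentioned above applies to each such cookie, which is plenty for a short random identifier like this.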

Browser fingerprinting

Browser fingerprinting is a highly accurate way to identify and track users whenever they go online. The information collected is quite comprehensive, and often includes the browser type and version, operating system and version, screen resolution, supported fonts, plugins, time zone, language and font preferences, and even hardware configurations.

These identifiers may seem generic and not at all personally identifying. But typically only one in several million people has exactly the same combination of specifications as you.
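A toy sketch in Python of the idea: hash a handful of browser attributes into a single identifier. The attribute values below are invented examples, not the output of a real fingerprinting library.

```python
# Combine several innocuous-looking attributes into one stable
# identifier. Identical machines collide, but that is rare at scale.
import hashlib
import json

attributes = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/65.0",
    "screen": "1920x1080",
    "timezone": "Asia/Kolkata",
    "language": "en-IN",
    "fonts": ["DejaVu Sans", "Liberation Serif"],
}

# Serialise deterministically (sort_keys), then hash: the same browser
# configuration always yields the same fingerprint across visits.
fingerprint = hashlib.sha256(
    json.dumps(attributes, sort_keys=True).encode()
).hexdigest()
print(fingerprint)
```

No cookie is stored anywhere here, which is exactly what makes fingerprinting hard to block: there is nothing on your machine to delete.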

Web beacons

Web beacons are very small, usually invisible objects embedded into a web page or email. Web beacons are also referred to as “web bugs,” which also go by the names “tags,” “page tags,” “tracking bugs,” “pixel trackers,” or “pixel gifs.”

In their simplest form, they are tiny clear images, often the size of a single pixel. They download as an image when the web page is loaded, or the email is opened, making a call to a remote server for the image. The server call alerts the company that their email has just been opened or their web page visited. This is why you should not display images in emails from senders you do not trust.

Web beacons are also used by online advertisers who embed them into their ads so they can independently track how often their ads are being displayed.
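Here is a sketch of the kind of beacon tag an advertiser might embed in a page or email. The tracker domain and parameter names are invented for illustration.

```python
# Build a 1x1 "pixel" image tag whose mere loading reports the given
# parameters to a remote server. (tracker.example.com is hypothetical.)
from urllib.parse import urlencode

def beacon_tag(base_url, **params):
    """Return an <img> tag that fires a tracking request when loaded."""
    query = urlencode(params)
    return f'<img src="{base_url}?{query}" width="1" height="1" alt="">'

tag = beacon_tag("https://tracker.example.com/pixel.gif",
                 campaign="spring_sale", user="abc123")
print(tag)
```

When the browser (or mail client) fetches the image, the server logs the query string, the requesting IP, and the time, which is all the beacon needed.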

The Anonymization Myth

Most companies claim that they don’t identify you by name when they hand over a profile of you - but what does that really mean, when you can be identified easily through all the other information included?

Here’s a good read on anonymization.

Protecting your-self

While companies (sometimes) allow users to take their data off the company servers (e.g. Google Takeout and Facebook), one can never be sure if that is the real deal or not. Companies might still be retaining derivatives or seemingly “anonymous” attributes from user data. As such, it’s always a better move to refrain from giving away information as much as possible. Some ways are discussed below.

  1. Use browser add-ons.

    Many add-ons like Privacy Badger from EFF allow for users to take a look at all the third party trackers enabled by the website’s owner, and disable them.

  2. Use Tor or a VPN.

    If you connect to the Tor anonymizing system, or use Tor’s browser, your ISP will only know that you have connected to Tor; from there it loses the data trail. Of course the downside to this is that your browsing will be slower.

    Be aware, your unencrypted traffic to websites outside the Tor network passes through a complete stranger’s exit node: the person running the exit node can watch what you’re doing. All you’ve done is move from your ISP snooping on you to an exit node admin watching you. On the other hand, you’ll cycle through different exit nodes, so it’s harder to be identified and tracked by websites outside the Tor network.

    A virtual private network is an alternative that will work for lots of people, especially if your work has a VPN service that you can use for free. This again will cut off your ISP’s ability to see what you are doing.

    But do some research on your VPN provider. Do NOT use a free VPN provider because they face even stronger financial temptations to sell your information. If you use a VPN, you are effectively giving that company the same level of insight into your online life as your ISP. So pay for one, and check out their policies on what they do with the data they build on you.

  3. Use a different search engine.

    Google offers a wonderful service, but everything you type in its search box is logged and connected to you in as many ways as possible. It is then sold on.

So why not use a different search engine? DuckDuckGo is an awesome search engine with NO user data logging. This Quora answer tells more about the features of DuckDuckGo.

Getting rid of some trackers from your site

  1. ajax.cloudflare.com

    inherent on websites hosted by Cloudflare’s DNS.

  2. graph.facebook.com

    active when Facebook’s developer services (e.g. the FB Comments plugin) are loading on a webpage.

  3. clients6.google.com

    active when webpages directly call Google servers for Javascript codes.

  4. fonts.gstatic.com

    active when Google fonts are called for CSS scripts.

  5. www.linkedin.com

    active when there are links to LinkedIn in the webpage.

Tracking the trackers

image

Lightbeam from Mozilla is a privacy browser extension that helps you discover who’s tracking you online while you browse the web.

You can get it here.

Some links

January 26, 2019 04:01 PM

January 25, 2019

Sehenaz Parvin

The Open superstition

I have a grave question about a fact:

Why do parents blame the teachers when the student scores a low grade and simultaneously why do parents congratulate their kids when they score a good percentile???

I don’t want to hurt anyone’s interests. I just want to put forward that in both cases the student and the teacher are equal contributors. The difference is that in the first case the effort of the student is less, and in the second case both are contributing equally to the system.

Am I right??? We should never blame the teachers for our own mistakes. They are the guides of our life. So, I think next time, before blaming the teachers, we should think twice about it. It’s actually damaging a kid’s psychological image of a teacher.

And the same goes for the students. Never blame your teachers. First think about your own mistakes, then come to any conclusions.

by potters6 at January 25, 2019 01:45 PM

January 20, 2019

Jagannathan Tiruvallur Eachambadi

Thoughts on Atomic Habits Commentary by Jason Braganza

Original post at https://mjbraganza.com/atomic-habits/

  1. Schedule and structure make or break the plan. Goals only show the direction of the task. Personally, I would say this has brought all the change I need.

  2. Answer in the affirmative. You don’t try to quit smoking; you simply don’t smoke, period. Personally, I don’t identify as someone who can’t eat butter or meat, but as someone who won’t. This strengthens my resolve in what I believe should be done.

  3. Environments are inherently associated with specific habits. I go to the department to work to make it possible to just concentrate on the task at hand instead of procrastinating. This has worked really well and it can be further improved but it is much better than being at home.

  4. Jason mentions a more important point that I had realized earlier but failed to follow through on: always repeat and practice something even if you are not good at it. We can always improve the parts of a task that are lacking rather than ditching the whole task.

  5. I have made running more enjoyable by running with an acquaintance and making them a friend. It is more interesting to interact with someone you don’t see every day.

  6. I will just leave the quote here, “Never miss twice. Missing once is an accident. Missing twice is the start of a new habit.”

I think most of it boils down to building an identity and continually improving it to best serve our needs. For me, it is a matter of compounding the effort put into building up a schedule this month to make it smoother in the coming days. As a nice side effect, I am getting 6 km of biking done every day for free :)

by Jagannathan Tiruvallur Eachambadi (jagannathante@gmail.com) at January 20, 2019 07:31 AM

January 19, 2019

Kuntal Majumder (hellozee)

Caring about the directories

The story begins at PyCon India 2018. I, with a tired soul (a registration desk is not quite a nice place for resting), shivering in the freezing cold after a dose of Paracetamol, asked fhackdroid to tell me about any project which uses Go, so that I could also contribute some patches during the devsprint.

January 19, 2019 02:35 PM

January 14, 2019

Piyush Aggarwal (brute4s99)

Arcanist

Phabricator
Phabricator

INTRODUCTION

This post is dedicated to Arcanist, a command-line interface to Phabricator. Phabricator is a set of tools that help KDE build better software, faster.

Various command-line solutions out there help developers achieve a good workflow across features and projects (Git, Mercurial et al); Arcanist takes the same approach, but feels a lot more practical to me.

Arcanist User Guide states thus:-

Arcanist provides command-line access to many Phabricator tools (like Differential, Files, and Paste), integrates with static analysis (“lint”) and unit tests, and manages common workflows like getting changes into Differential for review.

Setting up Arcanist

The two dependencies to Arcanist are - git and php. Install them using sudo pacman -S git php (or equivalent for your distro).

Then you can install Arcanist itself. It was as simple as yay -S arcanist for me. Other distros’ users may want to look for the Installing Arcanist subsection in the Arcanist quick start.

Next up, get the source code of the project you wish to work on by cloning it from cgit.

Now then, let’s dive into development! 🤖

Development with Arcanist

  1. You may find an interesting bug from KDE’s bug tracker or task from your project’s Workboard.

  2. Always create a feature branch/bookmark before touching any file in a clean clone. Use arc feature for it.

    arc feature name_of_feature_branch
  3. Poke around, play with the code, do your thing.

  4. When ready to submit a patch, type in arc diff. This will also help you maintain your submitted patches. Complete the forms that follow.

  5. That’s it! Your patch is submitted for review! You also get a link to share with others and see how the submission looks on Phabricator!

  6. Continue hacking on another bug or task, and wait for a review on the submitted patch!

Remember to make a different feature branch beforehand!

Tips

The world is not perfect, and many-a-times the reviewers will suggest changes to the patch before flashing the green light. Just revisit the branch, do the changes required, and hit arc diff again!

If you’re not sure about this, use arc diff --preview. I always use it before associating a diff with a submission! 😉

arc patch

You can always try out any submitted patch along with the latest master by using the arc patch command!

arc patch D18812

This command will do the following in definite order:-

  1. create a new feature branch with name arcpatch-D18812.
  2. apply patch D18812.
  3. set local tracking to the local branch arcpatch-D18812.
  4. checkout arcpatch-D18812 feature branch.

Don’t worry, even if it’s an old patch, Phabricator remembers the master branch commit the patch was based on! As an example:-

If you pull a particularly old patch, say D16553, you get a branch based on commit 657dec, whereas the current HEAD of master is 708bcb!

arc feature

Suppose you were at master in your clone and you run arc feature some_name. The some_name branch will be set to track the local master: if you commit anything to just your local copy of master and then git checkout some_name, git will ask you to perform "git pull", as your current branch is behind by some commits.

TL;DR

doing git pull in some_name will import the changes from the branch you had checked out before running arc feature some_name.

arc land

Perform arc land after you have completed the following checklist:-

  • Your submitted patch has been accepted by reviewers.
  • The reviewer(s) have EXPLICITLY tasked you to land the patch.
  • You do have a Developer Access Account in order to land the patch.

arc land automatically rebases (and errors if that failed), so you don’t have to do that manually, unlike Git.

This quickstart should be enough to get you started on KDE's Phabricator and setting sail on some binary adventures!

January 14, 2019 08:01 PM

Prashant Sharma (gutsytechster)

It’s never too late!

Hey there everyone! It’s me again after a long gap.

So, I’ve been busy, or you could say lazy, about writing anything. Maybe because I didn’t learn anything significant throughout that time. I won’t say that I didn’t learn anything at all; I was going through blogs, articles and other things, but couldn’t take out the time to write about them. However, I have realized the thing that was hindering me from writing blogs: trying to be perfect about what I’m going to write.

Even though I tried to keep writing while the #dgplug sessions were running, I couldn’t continue with the habit after they ended. Maybe I wasn’t able to develop a habit at all; maybe it was just a periodic habit, or I just started procrastinating. I usually come across articles or blog posts by different people which inspire me to do something. This time it was an article by one of the #dgplug folks, which I came across while going through its students’ planet. It is written by Pradhvan, and it really inspired me; I felt it described exactly what happens with me.

A small note about my learning

  • I started reading about Javascript from MDN. I was partially familiar with it but I wanted to know more.

  • I am planning to start studying DRF – Django REST Framework – in order to build skills for working with APIs, and since I have done the basics of Django, it appeals to me even more.

Currently, this is the only thing. Though I’ll keep updating as I learn anything new across my journey. I am writing this blog just to make a public commitment so that I don’t back off and really develop this habit.

That’s it for now. See you next time, very soon (hopefully!). Till then, Be Curious and Keep Learning!

by gutsytechster at January 14, 2019 02:58 PM

January 11, 2019

Vishal Singh Kushwaha (vishalIRC)

Everything has a story

While reading a bunch of math on a piece of paper, one rarely gets enough time to contemplate its origins. No-one is born with the answers. No-one gets handed a step-by-step plan, a plan which definitely leads to what one is destined to do. One’s purpose in life, then, is nothing but an illusion created by society and one’s yearning for control.

Every once in a while, we work on something for the sake of the thrill, the fulfilment of getting the work done. Of achieving that milestone. Until the next one comes along, life has purpose.

As human beings we like listening to stories, and telling them. This is essential because our brains are capable of processing and retaining them very well. Therefore, we must be careful about the stories we tell about ourselves. If you write your own story, it is possible to make a bad draft the first couple of hundred times.

We underestimate ourselves and our abilities. “I can never become an astronaut!” Well, you never became one. “I want a decent job!” Well, you got one. You will only go as far as you think you can, or as far as your protagonist goes.

Well then, what’s your story? And is it any good? Sure hope to see you when you’ve taken the red pill.

Vishal K.

by vishyboy at January 11, 2019 10:41 PM

Piyush Aggarwal (brute4s99)

Arch

Simplicity is the ultimate sophistication.
-Leonardo da Vinci

After eons of self-doubt and mixed opinions, I finally decided to get Arch Linux up and running on my laptop!

How it all began

My mentors on IRC insisted upon switching to a recent Linux distro. The reason was simple: to work with packages having the latest features. My IRC friends at #dgplug suggested a few flavours to choose from: the latest Ubuntu build, the latest Fedora build, or a rolling release distribution.

What’s a Rolling Release?

A rolling release is a Linux distribution model in which, instead of major updates to the entire operating system being released after a scheduled period of time, the operating system is updated at the application level whenever a change is pushed upstream.

There are a couple of rolling release models – semi-rolling and full rolling – and the difference is in how and what packages are pushed out to users as they become available.

A semi-rolling distribution, such as Chakra Linux and PCLinuxOS, classifies some packages to be upgraded through a fixed release system (usually the base operating system) in order to maintain stability.

A full rolling release, such as Gentoo, Arch, OpenSUSE Tumbleweed, and Microsoft Windows 10, pushes out updates to the base operating system and other applications very frequently – sometimes as often as every few hours!

Why switch?

The main benefit to a rolling release model is the ability for the end user to use the newest features the developer has enabled. For example, one of the newer features of the Linux kernel, introduced with the 4.0 update, was the ability to update the kernel without restarting your computer. In a rolling release distribution, as soon as this update was tested and marked as working by the development team, it could then be pushed out to the user of the distribution, enabling all future updates to the kernel to occur without computer restarts.

What’s new?

For Ubuntu users, it’s much the same as coming to Linux from Windows: there is a learning curve.

An excerpt from the Arch Wiki states thus:-

Arch Linux is a general-purpose distribution. Upon installation, only a command-line environment is provided: rather than tearing out unneeded and unwanted packages, the user is offered the ability to build a custom system by choosing among thousands of high-quality packages provided in the official repositories for the x86-64 architecture.

Oh, and one more thing: none of the proprietary software/packages/drivers come with the base installation. Read more about them here.

If you still think you can steer clear of proprietary software, think again, and one more time.

Theory’s over.

Baby Steps

A few pointers before we start the installation:-

  1. The installation requires a working internet connection, so I had a wired ethernet connection ready at my disposal. A WiFi module that does NOT have a Broadcom chipset would do just as well; since mine is a Broadcom chipset, I switched to a wired connection for the time being.
  2. Once the installation is done, all I’ll have is a bare-bones system with a login shell. You must absolutely be comfortable with the terminal, as almost no graphical utility comes out of the box.

I grabbed a USB drive and prepped it with this Arch Linux image. First thing after booting from the USB: I connected to the internet.

In case your WiFi has a Broadcom chipset, follow this. You need the driver firmware for the Broadcom chipset to get it working on your laptop, since it’s proprietary.

Connecting to the Internet

Connecting to internet via Ethernet

Just Plug n Play, you’re good to go!

Connecting to internet via WiFi

1. Create a profile for your wifi:

# wifi-menu

2. Connect to the profile you set:

# netctl start <profile_name>

3. If you want it to connect automatically at startup:

# netctl enable <profile_name>

Connecting to internet via Android USB tethering

1. List all the DHCP interfaces that are now available:

$ ls /sys/class/net

2. Connect to the new interface provided by Arch for your USB-tethered device:

# dhcpcd <enp....something_profile_name>

Check if you’re online : $ ping -c3 google.com


There are many good tutorials out there; follow any one of them.

Now that Arch was installed, I booted up the system and got connected to the internet again.

image

Now that I was online, I set up a GUI !

Installing a GUI

So the first thing I decided to get for my Arch was a GUI! It’s quite a simple procedure: you need a display manager and a desktop environment to interact with the X server.

X Server

X is an application that manages one or more graphics displays and one or more input devices (keyboard, mouse, etc.) connected to the computer.
It works as a server and can run on the local computer or on another computer on the network. Services can communicate with the X server to display graphical interfaces and receive input from the user.

My choice: # pacman -S sddm plasma

IMPORTANT! Install a terminal before rebooting into the GUI!

# pacman -S konsole

Configuring terminal

Sources + References:-

  1. http://jilles.me/badassify-your-terminal-and-shell/

Configuring weechat

Sources + References:-

  1. https://alexjj.com/blog/2016/9/setting-up-weechat/
  2. https://wiki.archlinux.org/index.php/WeeChat

Surfing through some sites also got me a couple of commands that would be of much help to most!

/mouse enable # in case you’d like to use the mouse in weechat

/redraw # a saviour when SSH-ing into a ZNC

You can’t find the packages through pacman?

Enter AUR : the Arch User Repository

Suppose I have to get a package that cannot be found by pacman. I will try to find it on the AUR home page.

For example: ngrok. Now, after reading the description, I know this is the package I was looking for. So now I will see how I can acquire it.

Here I can see two ways to acquire the package- by git clone (preferred), or by downloading the tarball.

It gives me one file: PKGBUILD. These PKGBUILDs can be built into installable packages using makepkg, then installed using pacman.

Fakeroot

Imagine that you are a developer or package maintainer working on a remote server. You want to update the contents of a package and rebuild it, or download a kernel from kernel.org, customize and build it. While trying to do those things, you’ll find that some steps require root rights (UID and GID 0) for different reasons (security, overlooked permissions, etc.). But it is not possible to get root rights, since you are working on a remote machine (and many other users have the same problem as you). This is exactly what fakeroot solves: it pretends an effective UID and GID of 0 to the environment which requires them.
P.S:-

  • UID: User ID
  • GID: Group ID

The git clone method is preferred since you can then update the package by simply git pull.

Why so much fuss?

You can always try out AUR helpers. I set up yay in my configuration, since it also shows DIFFs when installing new or upgrading packages from the AUR.

Why would you want to read DIFFs?

Essentially, a PKGBUILD is a shell script (so it can possibly have malicious or dangerous content, so look before you leap), but since it’s run under fakeroot, there is some level of security, albeit limited. Still, we shouldn’t try and push our luck.

So after all this, I successfully set up Arch Linux, WiFi, Desktop Environment, Terminal and Weechat in my laptop! Next was installing basic software packages and fine tuning the GUI to my personal tastes.

Firefox Developer Edition – For Web Browsing

tor-browser – For private internet access

Konsole – Terminal

Deepin Music Player – Music Player

Gwenview – Image viewer and editing solution

Steam – for Games

Kontact – for updates on calendar events

VLC – Video player

The end result

image

beautiful, isn’t it?

Setting up a personal Arch Linux machine taught me many things about the core Linux system: how exactly the system is set up during installation, and how different utilities orchestrate to form my complete workstation, ready to build beautiful code and software!

January 11, 2019 05:31 PM

My Testimony about Blockchain - Part 2

They’ll what?

They’ll fork off of the network.

A byproduct of distributed consensus, forks happen anytime two miners find a block at nearly the same time. The ambiguity is resolved when subsequent blocks are added to one, making it the longest chain, while the other block gets “orphaned” (or abandoned) by the network.
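The longest-chain rule described above can be sketched in a few lines of Python (a toy model with chains as plain lists; the names are made up for illustration):

```python
def resolve_fork(chain_a, chain_b):
    """Nodes keep whichever branch grows longer; the other
    tip is orphaned (abandoned)."""
    return chain_a if len(chain_a) >= len(chain_b) else chain_b

# Two miners found block 3 at nearly the same time...
fork_a = ["genesis", "b1", "b2", "b3a"]
fork_b = ["genesis", "b1", "b2", "b3b"]
# ...then the next block lands on the second branch:
fork_b.append("b4")
print(resolve_fork(fork_a, fork_b))  # ['genesis', 'b1', 'b2', 'b3b', 'b4']
```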

But forks can also be willingly introduced to the network. This occurs when developers seek to change the rules the software uses to decide whether a transaction is valid or not. Forks can be classified into two kinds, hard and soft; both have different implications for the network and ecosystem.

Hard forks are a permanent divergence in the block chain; they commonly occur when non-upgraded nodes can’t validate blocks created by upgraded nodes that follow newer consensus rules.

Soft forks are a temporary divergence in the block chain, caused by non-upgraded nodes not following new consensus rules.

Miners can add blocks to the blockchain so long as every other node on the network agrees that their block fits the consensus rules and accepts it.

The Block Header

So what do these miners do, exactly? They hash the block header: 80 bytes of data that will ultimately be hashed.

The header contains this info:

Name           Byte size  Description
Version        4          Block version number
Previous Hash  32         Hash of the previous block header
Merkle Root    32         Hash based on all of the transactions in the block
Time           4          Current timestamp in seconds (Unix format)
Bits           4          Target value in compact form
Nonce          4          Value adjusted by the miner, starting from 0
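Just to make the header layout concrete, here is a small Python sketch (the field values are made up for illustration) that packs those six fields into the 80-byte structure and double-SHA-256 hashes it, the way Bitcoin miners do:

```python
import hashlib
import struct

def block_header(version, prev_hash, merkle_root, timestamp, bits, nonce):
    """Pack the six fields from the table into the 80-byte header.
    Integers are little-endian, as in Bitcoin; prev_hash and
    merkle_root are raw 32-byte values."""
    return (
        struct.pack("<I", version)
        + prev_hash
        + merkle_root
        + struct.pack("<III", timestamp, bits, nonce)
    )

def header_hash(header):
    """Bitcoin hashes the header twice with SHA-256."""
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Made-up values, just to show the sizes add up: 4+32+32+4+4+4 = 80 bytes.
header = block_header(2, b"\x00" * 32, b"\x11" * 32, 1547164800, 0x1d00ffff, 0)
print(len(header), header_hash(header).hex())
```

The 32-byte result of header_hash is what gets compared against the target described next.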

A snap of the latest block on the Bitcoin blockchain at the time of writing.

How would the consensus deem a mined block as accepted?

See the Bits field? It is the integer (base-10) representation of the target that is to be achieved by the miners. The target is an upper bound on the 256-bit hash sum of the block header: the MAXIMUM value acceptable to the consensus for that hash.

MAXIMUM value?

I thought you’d never ask! See the nonce field in the block header? Yup, miners start all the way from 0 (some may skip values; it’s completely up to the miner) and search for a number that, when used in the block header, yields a hash sum below the target. See the nonce in the latest-block image? The miner who successfully relayed that value to the nodes received the prize money, i.e. 12.5 BTC! That’s a lot of work and indeed a lot of bucks!
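Here’s a toy version of that nonce search in Python. The target below is deliberately easy and entirely made up so the loop finishes instantly; real Bitcoin targets are astronomically harder:

```python
import hashlib

def mine(data, target):
    """Increment the nonce from 0 until the double SHA-256 of
    data + nonce, read as a 256-bit integer, falls below the target."""
    nonce = 0
    while True:
        payload = data + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce, digest
        nonce += 1

# Made-up, very easy target: any hash whose top 8 bits are zero
# qualifies, so roughly 1 try in 256 succeeds.
easy_target = 1 << 248
nonce, digest = mine(b"toy block header", easy_target)
print(nonce, digest.hex())
```

Lowering the target makes qualifying hashes rarer, which is exactly how difficulty is tuned.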

People buy special hardware (the recent scarcity of GPUs? Curse those miners!) and even computers built especially for this purpose! Ever heard of ASICs?

As it stands, mining on your own, on a single PC, is almost dead. The process of finding blocks is now so crowded, and the difficulty of finding a block so high, that it would take over a year to generate any coins on an average high-end gaming system. While you could simply set a machine aside and have it run the algorithms endlessly, the energy cost and equipment degradation would eventually cost more than the actual bitcoins are worth.

Pooled mining, however, is far more lucrative. Using a service, you can split the work among a group of people, using this equation:

(12.5 BTC + block fees – 2% fee) * (shares found by user’s workers) / (total shares in current round)

Putting it simply, that is basically how the system works: you work for shares in a block, and when it is complete you get a percentage of the block reward based on the number of workers alongside you. The more people in the pool, the higher the chances of rewards.
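That payout equation is easy to turn into a small Python function. One assumption on my part: I’m reading the 2% pool fee as being taken off the whole pot (block reward plus transaction fees) before it is split by shares:

```python
def pool_payout(user_shares, total_shares, reward=12.5, block_fees=0.0, pool_fee=0.02):
    """Payout per the equation above: the pool keeps its fee,
    and the rest is split in proportion to shares found."""
    pot = (reward + block_fees) * (1 - pool_fee)
    return pot * user_shares / total_shares

# If your workers found 50 of the 1000 shares in the current round:
print(pool_payout(50, 1000))  # 0.6125 BTC
```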

Types of Blockchains in use

Any blockchain can be classified into one of these categories:

Public Blockchain

The most basic of all blockchain concepts; this is the blockchain everyone out there uses.

Its most basic features are:

  • Anyone can run a BTC/LTC full node and start mining.
  • Anyone can make transactions on the BTC/LTC chain.
  • Anyone can review/audit the blockchain in a blockchain explorer.

Example: Bitcoin, Litecoin etc.

Private Blockchain

A private blockchain, as its name suggests, is the private property of an individual or an organization. Unlike a public blockchain, here there is actually someone in charge who looks after important things such as read/write access, selectively granting it to some and not others. Here, consensus is achieved at the whim of the central authority, which can give mining rights to anyone, or to no one at all!

Example: Bankchain

Consortium Blockchain

This type of blockchain tries to remove the sole autonomy which gets vested in just one entity by private blockchains.

So here you have multiple authorities instead of just one. Basically, you have a group of companies or representative individuals coming together and making decisions for the benefit of the whole network. Such groups are also called consortiums or a federation; ergo the name consortium or federated blockchain.

For example, let’s suppose you have a consortium of world’s top 20 financial institutes out of which you could decide that if a transaction or block is voted/verified by more than 15 institutions, only then does it get added to the blockchain.

Example: r3, EWF
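That 15-out-of-20 example is just a threshold check; a tiny sketch (the function name is mine, not from any real consortium software):

```python
def consortium_accepts(verifications, threshold=15):
    """Per the example above: a block joins the chain only when
    more than `threshold` member institutions have verified it."""
    return verifications > threshold

print(consortium_accepts(16))  # True
print(consortium_accepts(15))  # False
```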

In fact, the idea that cryptographic keys and shared ledgers can incentivize users to secure and formalize digital relationships has imaginations running wild. Everyone from governments to IT firms to banks is seeking to build this transaction layer.

Authentication and authorization, vital to digital transactions, are established as a result of the configuration of blockchain technology. The idea can be applied to any need for a trustworthy system of record.

January 11, 2019 05:31 PM

It's a blog !


This is the first post that comes with the blog by default.

Let’s see.

I made a blog.

Let’s try our best to make it useful, yeah ?

I don’t wish you all to be watching ads with my blog, so just wait for a while!

Good company in a journey makes the way seem shorter.
— Izaak Walton

January 11, 2019 05:31 PM

My Testimony about Blockchain - Part 1

Blockchain is a vast, global distributed ledger or database running on millions of devices and open to anyone, where not just information but anything of value — money, but also titles, deeds, identities, even votes — can be moved, stored and managed securely and privately. Trust is established through mass collaboration and clever code rather than by powerful intermediaries like governments and banks.
–Wikinomics

So I’ve been reading all about blockchains (even those 12 point font research papers!). This is a rough gist of what I learnt:-

A distributed ledger

Wikipedia explains thus –

“A distributed ledger is a consensus of replicated, shared, and synchronised digital data geographically spread across multiple sites, countries, or institutions. There is no central administrator or centralised data storage.”

This seems too much condensed. Let me break it down for you.

  • There is no central authority.
  • Every transaction here occurs in front of an array of guards that maintain order and make sure the transactions are completed in full by both parties.
  • These guards are just some computers that have volunteered to become a ‘node’. Only these nodes can validate the transactions of every user on a blockchain.

Before we go any further, I need to tell you what a transaction means in this context.

A transaction occurs when there is an exchange of data between any two parties. It need not be money only. It can be any data, you can even make a deal involving official papers of properties through some blockchain implementing platform!

And if this sounds scary, don’t worry; no one, not even those nodes (the ones which supervise the transactions), knows what exactly you exchanged! Kudos to privacy! And that’s not even half of it! I’ll explain more later. Consider the conventional case of a bank (a conventional central authority).

NOTE : We are using ‘bank’ as an example just because it comprises a good amount of ‘transactions’. Always remember that these ‘transactions’ can be of data or goods too!

So here, in a bank, all the transactions between accounts are verified by a single, central authority, and all your possessions with the bank are at the mercy of its whims, the bank being the single point of security in the transaction. If, by any chance, the bank burns down (physical damage to the central authority), gets robbed (or hacked), or seizes your account (unethically or otherwise), there would be consequences, the likes of which you most probably won’t be comfortable with.

Enter blockchain with the power of a consensus-based distributed ledger! If we consider the bitcoin blockchain, there are about 7000 nodes in the network that all work for the security of all those precious bitcoins that keep soaring and falling by the minute. For bitcoin to fail, all these 7000 points of security, or at least half of them, would have to be attacked at the same time. Not only that: with the sky-high pricing of these virtual currencies, more and more people are opting in to become nodes, which adds to the security of the users (traders) making transactions over the bitcoin blockchain. So that’s security for you and the ‘things’ you love! If you wish to know more about a blockchain that deals with data, check out ethereum. Ethereum is an open-source, public, blockchain-based distributed computing platform and operating system featuring smart contract functionality.

Block

A block is the ‘current’ part of a blockchain, which records some or all of the recent transactions. Once completed, a block goes into the blockchain as a permanent database. Each time a block gets completed, a new one is generated. There are countless such blocks in the blockchain, connected to each other (like links in a chain) in proper linear, chronological order. Every block contains a hash of the previous block. The blockchain has complete information about different user addresses and their balances right from the genesis block to the most recently completed block. Every node on the blockchain has a copy of the ledger with themselves, that gets synced after creation of a new block.
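The “every block contains a hash of the previous block” idea can be sketched as a toy chain in Python (an illustration only, not any real coin’s block format):

```python
import hashlib
import json

def make_block(prev_hash, transactions):
    """A toy block: its identity is the hash of its own contents,
    and those contents include the previous block's hash --
    that link is the 'chain' in blockchain."""
    block = {"prev_hash": prev_hash, "transactions": transactions}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

genesis = make_block("0" * 64, ["X units to the initial accounts"])
block1 = make_block(genesis["hash"], ["A pays B"])
block2 = make_block(block1["hash"], ["B pays C"])

# Each block points at the one before it:
print(block2["prev_hash"] == block1["hash"])  # True
```

Because each block’s hash covers the previous block’s hash, changing an old block would change its hash and break every link after it.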

The ‘what’ block ?

Every blockchain has to start somewhere, so there’s what’s called a genesis block at the beginning. This is the first block, and there, at the beginning, the creators of Ethereum (or any other cryptocurrency) were at liberty to say “To start, the following accounts all have X units of my cryptocurrency.” Any transfer of data on the blockchain will have originated from one of these initial accounts (or from mining).

The blockchain was designed so that these transactions are immutable, meaning they cannot be deleted. The blocks are added through cryptography (more later), ensuring that they remain tamper-proof: the data can be distributed, but not copied (a node never knows exactly what’s in the transactions). You can always see a block yourself by using a blockchain explorer.

Privacy – how?

The blockchain isn’t just a bunch of computers watching that A sent something to B in return for some data; it’s so much more than that! On-chain transactions refer to those cryptocurrency transactions which occur on the blockchain – that is, on the records of the blockchain – and remain dependent on the state of the blockchain for their validity. All such on-chain transactions occur and are considered to be valid only when the blockchain is modified to reflect these transactions on the public ledger records.

What the crypto?!

So how exactly does cryptography fit in with the blockchain? It’s simple: the nodes seal the data with a 256-bit number (a hash sum) that represents the data within a block. A different blockchain may use a different hash function, but the basic idea of its integration in the blockchain remains the same (more or less).

Hashing Functions

A basic idea of any hash function.

If you look closely, you’ll notice even a slight change (even just 1 bit) in the data would create a different hash sum altogether. There is simply no pattern at all!
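You can see this avalanche effect for yourself with Python’s hashlib (using SHA-256 here; the exact hash function varies by chain):

```python
import hashlib

a = hashlib.sha256(b"Send 10 coins to Alice").hexdigest()
b = hashlib.sha256(b"Send 11 coins to Alice").hexdigest()  # one character changed

print(a)
print(b)

# Count how many of the 64 hex digits differ between the two digests.
differing = sum(x != y for x, y in zip(a, b))
print(differing, "of 64 hex digits differ")
```

Almost every digit of the digest changes, even though the inputs differ by a single character.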

So here comes the answer to a question that might’ve struck you-

Why would anyone waste their own electricity and compute power to validate my transactions? Social service? Repentance out of guilt?

It’s MONEY!

There are nodes, there are traders, then there are MINERS.

Miners are a subset of nodes, as all miners must run a full node (i.e. they must hold the complete ledger) in order to mine properly. The nodes are what determine consensus, as all nodes must agree to the same rules; otherwise they will fork off of the network.

Continue Reading

January 11, 2019 05:31 PM

Recovering your Arch from hell

Rebuilding an Arch

easier than it looks

PROBLEM

Not clear, but it looks like misconfigured packages after multiple installations, uninstallations and re-installations of packages and desktop environments.

PROLOGUE

So today I had problems that caused KDE Plasma to not acknowledge my laptop as a laptop. In other words, my Arch was on the edge of collapse.

BABY STEPS

So, I tried reinstalling all the packages of my installation in one command, like so

# pacman -Qenq | sudo pacman -S -

But as you can see the post hasn’t ended here, it didn’t pan out.

SOLUTION

After hours of help at #archlinux and #kde-plasma, I found this Forum page that gave me just the right instructions!

  1. First up, I removed all the orphaned/unused packages rotting away in my system.

    # pacman -Rns $(pacman -Qtdq)
  2. next, I force-reinstalled all the packages I had in my installation.

    # pacman -Qqen > pkglist.txt
    # pacman --force -S $(< pkglist.txt)

EPILOGUE

Now my installation is sweet as candy with no loss of any personal configs, and everything is perfect again!

😄 🎉

January 11, 2019 05:31 PM

ABCs of Unix

UNIX

A is for awk, which runs like a snail, and
B is for biff, which reads all your mail.
C is for cc, as hackers recall, while
D is for dd, the command that does all.
E is for emacs, which rebinds your keys, and
F is for fsck, which rebuilds your trees.
G is for grep, a clever detective, while
H is for halt, which may seem defective.
I is for indent, which rarely amuses, and
J is for join, which nobody uses.
K is for kill, which makes you the boss, while
L is for lex, which is missing from DOS.
M is for more, from which less was begot, and
N is for nice, which it really is not.
O is for od, which prints out things nice, while
P is for passwd, which reads in strings twice.
Q is for quota, a Berkeley-type fable, and
R is for ranlib, for sorting ar table.
S is for spell, which attempts to belittle, while
T is for true, which does very little.
U is for uniq, which is used after sort, and
V is for vi, which is hard to abort.
W is for whoami, which tells you your name, while
X is, well, X, of dubious fame.
Y is for yes, which makes an impression, and
Z is for zcat, which handles compression.

— THE ABCs OF UNIX

January 11, 2019 05:31 PM

Blurred Lines

PROLOGUE

While setting up a dev environment on my CAIRO-STATION (the desktop computer at my home), I realized I could not install Linux on it, since the system is used by all family members. My best bet would have been a VM or some sort of containerization. Then I recalled my early development days, and realized both of these are inferior to the Windows Subsystem for Linux (WSL).

When I first discovered WSL as an optional feature in Windows, I was busy jumping between playing Just Cause 2 (a really great open-world game, you MUST check it out!) and studying for XII “Board Exams”. I had a slight taste of Linux back then, enough to perform the most basic functions: ls, cd, and screenfetch (my favorite).

Then, last year, I saw Microsoft announce three more Linux flavors incoming for WSL at the Build developer conference; all I understood was more screenfetch outputs to bask in!

Motivation

Today, after a year of experience and two incredibly knowledgeable months at DGPLUG, ideas have become more achievable.

Today, an idea struck my mind-

Developers can finally use Ubuntu through a command-line interface, great! If they could also use GUI apps fired from within the Ubuntu bash CLI, ah, that would be lovely.

Ever since I installed Arch Linux on my system after days of research, I have come to appreciate all the little bits and procedures involved in installing an OS, and everything that a proprietary-software user believes should come with it.

Since now I possessed the knowledge I needed to pull this off, I fired up my home PC and I was ready to hack!

Baby Steps

  1. Got Ubuntu 18.04 from windows store.
  2. Turned ON WSL for my system.
  3. Updated all packages and installed screenfetch.

The New Part

The X server handles output to the GUI, and a variable DISPLAY needs to point to the X server. This is done with the command export DISPLAY=:0. To avoid running it manually every time, I appended it to my shell config with the command: echo 'export DISPLAY=:0' >> ~/.zshrc.

Now, all that I needed was an X Server that serves well!

My first attempt at an X server for Windows was Xming, but unfortunately it couldn’t be detected by WSL.

X Server could not be found by WSL

Next up, I tried another procedure, which goes as follows:

  1. WSL opens up a TCP type port 2222 for SSHing.
  2. I SSH through PuTTY and enable X11 Forwarding inside it.
  3. The X Server used was still Xming.

The result is in the following photograph.

Some Progress!

There was still a problem with this setup: latency. It seems obvious in hindsight that SSHing into my own system is not wise. So I decided to get a better terminal application and get this show on the road!

So, this time, I installed ConEmu. For those who are having a hard time shifting from Linux to the Command Prompt or Powershell, this is a relief. ConEmu is extremely customizable and rock-solid!

Also, I changed my X server to MobaXterm, which does a far better and simpler job of handling the X server and related tasks (servers, tunneling, packages, file system).

Final Set Up

The MobaXterm X server starts at log-in, and firing up ConEmu gives me an Ubuntu CLI.

Testing


Mozilla Firefox


VS Code

I also followed a blog post by Nick Janetakis on setting up Docker to work with WSL flawlessly!

His setup was the inspiration for this post, and I hope it serves me well in my upcoming endeavors!

January 11, 2019 05:31 PM

A real life Hogwarts

Skimming through your seniors’ profile does some good at times!

“The programmers of tomorrow are the wizards of the future !”
– Gabe, Co-founder, Valve

dgplug

LINUX USERS’ GROUP OF DURGAPUR
Learn and teach others

An excerpt from the official site:

Objectives

  • Revisiting programming fundamentals
  • Get acquainted with Free Software technologies
  • Spreading the hacker ethics
  • Gaining technical knowledge and real-world project experience

What I have learnt within a month at the #dgplug online summer training is invaluable to me! We get to talk to and learn from the Jedi of F/OSS, attend guest sessions with international upstream contributors, and so much more!

An excerpt from a Quora answer:

How is the summer training at Dgplug?

For me, it was like Hogwarts, a place which normal people don’t know, yet full of surprises, and new learning! It opened a whole new world for me!
-Avik Mukherjee

And frankly, that makes two of us.

January 11, 2019 05:31 PM

A better blog

Hi Greg! I saw your GitHub profile had this starter for a personal blog, so I thought

Why not!?

After days of scouring the internet, I finally landed on GatsbyJS starters, because

  1. There were seriously NO good themes for Nikola.
  2. I didn’t like the UI provided by Pelican.
  3. Gatsby looked good enough to be my blog :tongue:
  4. I did not wish to learn Front-end for the next month to make the Gatsby site out of documentation!
  5. Greg has made a masterpiece out of all the techs in the left bottom of the start page!

When I went looking for themes among the GatsbyJS starters and landed on your GitHub, I was elated to find you had the perfect fit waiting to be found!

Next up, I will be moving all my posts from WordPress over here and publishing some new ones soon.

January 11, 2019 05:31 PM

January 04, 2019

Kuntal Majumder (hellozee)

It is New Year my dudes

If you don’t get the meme reference in the title, here is one more for you. Enough of memes; let’s talk about something trendy, something that everyone is talking about, because January 2019 is all about setting up goals and resolutions. I am not being punny here.

January 04, 2019 06:01 PM

December 24, 2018

Kuntal Majumder (hellozee)

Flashback 2018

tl;dr: a typical end-of-the-year post, as you may expect. Well, 2018 was a significantly productive year for me compared to other years; I learned many of the things I wanted to, and added more things to the bucket list for the coming year.

December 24, 2018 11:19 AM

December 21, 2018

Kuntal Majumder (hellozee)

How to Learn

If you search the web for how to learn something, you will surely get a bunch of techniques that help you remember things, but that is not the learning I am talking about; that is, in a sense, a kind of memorization.

December 21, 2018 03:09 AM

December 19, 2018

Pradhvan Bisht (pradhvan)

Memory Management in Python – Part 2

In the last part, we checked out how variables are stored in Python and how Python handles memory management with reference counts and the garbage collector. If you haven’t checked it out and want to, here is the link.

In this part, we will dig a little deeper into how reference counting works and how the count increases or decreases in different cases. So let’s start where we left off: every object in Python has three things

  1. Type
  2. Reference Count
  3. Value

The reference count is a value showing how many times an object has been referred (pointed) to by other names (variables). Reference counting helps the garbage collector free up space so the program can run efficiently. We can increase or decrease the reference count, and we can check it with the built-in function sys.getrefcount().

Let’s take a small code snippet:

import sys

a = []
# Two references: one from the variable and one from the getrefcount() call
print(sys.getrefcount(a))  # 2

Though the examples look great and everything seems to work, I did kind of trick you. Not all reference counts start from the same value, so if you run the same example with a different value the output may differ. A reference count depends on two factors: the number of times the object is used in the bytecode, and the number of times it has been referenced elsewhere, which for shared objects includes references made by the interpreter itself.
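To see why baselines differ, compare a freshly created object with a cached small integer. This is a minimal sketch of a CPython implementation detail, so it measures rather than hard-codes the shared case:

```python
import sys

fresh = []                           # brand-new object: one reference from the name
assert sys.getrefcount(fresh) == 2   # +1 temporary reference from the call itself

shared = 100                         # a small integer the interpreter also uses
assert sys.getrefcount(shared) >= 2  # the baseline is higher and version-dependent
```

The fresh list always starts at 2 (the name plus getrefcount's own temporary reference), while the cached integer's count depends on how many other places in the running interpreter refer to it.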

Let’s look into another example:

import sys

a = 100
print(sys.getrefcount(a))  # 4

b = 100
print(sys.getrefcount(b))  # 5

c = 100
print(sys.getrefcount(c))  # 6

When more variables reference the same value, the reference count increases. But things change a bit when we consider container objects, like lists, and constants.

import sys

ex_list = [a, b, c, d]
print(sys.getrefcount(a))  # 8
print(sys.getrefcount(b))  # 9
print(sys.getrefcount(c))  # 10
print(sys.getrefcount(d))  # 11

del ex_list
print(sys.getrefcount(a))  # 7
print(sys.getrefcount(b))  # 8
print(sys.getrefcount(c))  # 9
print(sys.getrefcount(d))  # 10

# Same thing goes with constants
print(sys.getrefcount(10))  # 12
const = 10
print(sys.getrefcount(10))  # 13
const = const + 10
print(sys.getrefcount(10))  # 12

As we saw, container objects such as lists hold references to the objects inside them; when we delete the list, those reference links are removed, so each object inside the list has its reference count decreased by one. The same happens with constants: when the variable referencing them is rebound, the reference count is decremented.

By now you must have realized that del does not actually delete the object; on the contrary, it removes that variable (name) as a reference to the object and decreases the reference count by one.
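That point about del can be sketched in a few lines. Since exact counts are a CPython detail, the snippet measures the baseline instead of hard-coding it:

```python
import sys

value = []                        # a fresh list, referenced only by `value`
base = sys.getrefcount(value)     # includes getrefcount()'s temporary reference
alias = value                     # a second name for the same object
assert sys.getrefcount(value) == base + 1
del alias                         # removes the name, not the object
assert sys.getrefcount(value) == base
assert value == []                # the object itself is untouched
```

Deleting the alias only drops one reference; the object survives as long as any name still points at it.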

All the examples we have seen so far live in the global namespace, but what happens to the reference count inside a function? Let’s find out through this code snippet:

import sys

num = 100
print(sys.getrefcount(num))  # 4

def ytf(number):
    print(sys.getrefcount(number))

ytf(num)  # 6
print(sys.getrefcount(num))  # 4

We saw that when ytf() came into scope the reference count increased, and it decreased again when the function went out of scope. Keeping this in mind, we should be careful about using large or complex objects in the global namespace: an object there does not go out of scope unless its reference count drops, so a large object keeps consuming memory and makes the program less efficient.

That’s all for this part. In the next part, we will look closely at the garbage collector and how it frees up memory inside a Python program.


by Pradhvan Bisht at December 19, 2018 12:20 PM

December 16, 2018

Pradhvan Bisht (pradhvan)

Memory Management in Python – Part 1

Stumbling upon a Python code snippet in a GitHub repo, I came to realize that in Python variables don’t actually store the values they are assigned; they store the location of the value. This is unlike C/C++, which actually creates a space of a fixed size and assigns it to the variable created, the bucket/room we usually use while explaining to a beginner “what variables are” in programming.

Python variables are a bit different, though: they work like keys which point to a particular room in the hotel (memory space). So whenever we make an assignment to a variable we are not creating rooms, rather creating keys to that room, which is freed/overwritten by Python’s garbage collector automatically. (More on the topic of the garbage collector later.) So the point is, whenever we do something like

a = 10
b = 10
id(a)  # 94268504788576
id(b)  # 94268504788576

Python is optimizing here, creating two keys which point to the same room in the hotel (memory); thus they have the same id. But this kind of optimization works only for integers in the range -5 to 256. If you exceed that range, the variables point to two different storages and will have different id()s.

Just don’t get confused about why we did not use “==” and instead used id(): == checks whether the values pointed to by the variables are the same, while id() checks whether they use the same object, because every object has a unique identity which can be checked with the id() function.

Following the official docs on id(), the value is the address of the object in memory (at least in CPython).
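We can double-check the caching claim without relying on exact id() values by comparing objects directly. This is a CPython implementation detail, not a language guarantee, and int("1000") is used here only to dodge compile-time constant folding:

```python
# CPython pre-creates integers from -5 to 256, so equal literals
# in that range share one object.
a = 10
b = 10
assert a is b            # same cached object, hence the same id()

x = 1000                 # outside the cached range
y = int("1000")          # built at runtime, avoiding constant folding
assert x == y            # equal values...
assert x is not y        # ...but two distinct objects with different id()s
```

Note that `is` compares identity (like comparing id() values), while `==` compares the stored values, which is exactly the distinction made above.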

Coming back to the code snippet from the GitHub repo, given below, and applying the same knowledge from integers to strings:

a = "wtf"
b = "wtf"
id(a), id(b)  # (139942771029080, 139942771029080)

a = "wtf!"
b = "wtf!"
id(a), id(b)  # (139942771029192, 139942771029136)

a = "hello world this is a string"
b = "hello world this is a string"
id(a), id(b)  # (139942770977328, 139942770977408)

The same kind of optimization happens here too: when strings are small and identifier-like, they refer to the same object in memory rather than creating a new one, thus saving memory; this is called interning. But when the string becomes bigger, or contains characters other than ASCII letters, digits, or underscores (like the “!” above), it is not interned automatically.
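A quick sketch of this behaviour (again a CPython detail; the strings below are concatenated at runtime because equal literals inside a single script get folded into one constant, which would hide the effect):

```python
import sys

a = "wtf"
b = "wtf"
assert a is b                 # identifier-like literals are interned

bang = "!"
c = "wtf" + bang              # built at runtime, so not automatically interned
d = "wtf" + bang
assert c == d                 # equal values...
assert c is not d             # ...but two distinct objects
assert sys.intern(c) is sys.intern(d)  # explicit interning unifies them
```

sys.intern() lets you opt in to interning for strings that would not qualify automatically, which can save memory when many equal strings are kept around.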

This shows abstraction at its best, and Python is very good at it: it does all the heavy lifting of allocating/deallocating memory for you and lets you focus on the other parts of the program. Until you really want to know what’s happening, and I assume you do; that’s why you are reading this blog 😛

Though this explanation was already available in the repo, I wanted to know more about how memory management happens internally, so I stumbled upon the talk “Memory Management in Python – The Basics” by nnja. So yeah, people with nicks like ninja are great with Python and Fortnite, hahaha! (I could not resist posting this joke; just to clear things up, Ninja is one of the top Fortnite players.)

Thus, technically Python does not have variables but ‘names’ which refer to objects (or other names), and Python keeps a count of all the references to an object, called its reference count. If the reference count of an object drops to zero, no reference is made to that object; the garbage collector then sees its space as free, the object is deleted, and the space is free to use.

We as programmers can increase or decrease the reference count of an object, since a Python object stores three things:

  • Type: int,string,float
  • Reference Count
  • Value

Looking at the first code snippet of the blog: the two names a and b both point to the same object with the value 10, type int, and a reference count of at least 2 (one per name).

That’s all for this part; I will cover the same topics in a bit more detail in the next part. I am still confused about some things, so I am keeping this plain and simple for future me to look back on when I am lost and can’t even notice the simple things 😛 and for someone who just wants to know this briefly.

by Pradhvan Bisht at December 16, 2018 01:16 PM

December 11, 2018

Pradhvan Bisht (pradhvan)

REST APIs with DJANGO

I recently finished REST APIs with Django by William S. Vincent. It’s not a lengthy book, hardly 190 pages, but it does pack a lot of information if you are just starting out building APIs with Django, and REST APIs in particular.

It’s well written, so it’s easy to understand, and it assumes you have just started out with Django, though this could be a bit frustrating to read if you have already built some Django apps, because it explains a lot of basic concepts that I assume most readers would know about. It uses Django 2.1 and Pipenv for the virtual environment instead of venv, so that was new 😛

tl;dr A lightweight and simple book that packs a lot if you are just starting out with REST APIs.

I picked up this book because I wanted to work with the Django REST Framework, and while reading a blog post by the same author I noticed his book at the end. I liked the blog and did a quick search to check the reviews of the book; the reviews were positive, so I bought the book for my Kindle.


The book mainly revolves around three projects that are covered in the nine chapters of the book (eight, I should say, as the first chapter talks about the basics of the World Wide Web, IPs, URLs, APIs, endpoints, and HTTP, and ends by explaining what REST APIs are). Well, the takeaway was that REST is just an architectural approach to building APIs, and a REST API at minimum follows these three principles:

1. It’s stateless.

2. It supports GET, POST, PUT, DELETE (HTTP verbs).

3. It returns data in either JSON or XML format.

So where does the Django REST Framework come into play, you ask? It’s simple! It creates APIs that support all the HTTP verbs and return JSON.

One more thing, not to confuse you, just clearing things up 😛 Django creates websites containing web pages, and the Django REST Framework creates web APIs; they are two different, separate frameworks. Yes! Both can be used simultaneously in a single web app.

Now that I have flaunted my newly acquired knowledge let’s move forward with this review. haha!

The book helps you build three different projects. The first is a very basic library website API; since it’s the first project in the book, it’s just there to get you set up with the process, but it does the important job of helping you distinguish between the Django framework and the Django REST Framework.

The second is a ToDo API with a React front end, though I think it’s put in the book either to get you used to making REST APIs, which can be repetitive at times, or to get a beginner programmer to think “oh! it covers React too, nice!” (bait). If you skip the chapter, nothing would happen. For those of you wondering: I did not skip the chapter, I had to write this fancy review #dedication, haha!

The project that you will get the most out of is the last one, and it’s the most basic thing that every Django developer builds when starting out. You guessed it right: a blog website, so this book helps you build a blog API with Django.

The whole project is spread across five chapters and broadly covers user permissions, user authentication, viewsets, routers, and schemas. It gives you enough understanding to look up the Django REST Framework documentation with ease. I also think the author chose this Blog API project in particular because a beginner who started with the Django Girls tutorial could apply the same changes to that project, gaining an even better understanding with something to work on by themselves, which I would highly recommend doing and will be doing now.

I would rate this book 3.9/5.

A must-buy if you are a beginner in Django, or just starting out making REST APIs, or both. If you have decent experience with Django and know your internet jargon well, I would suggest going with the official documentation.


by Pradhvan Bisht at December 11, 2018 05:19 PM

December 10, 2018

Kuntal Majumder (hellozee)

Recreating the Marvel Intro with Python and Nuke

Let me start this one with a story. Once there was a kid who loved to play games, after a while, he wanted to make games of his own and tada a programmer was born.

December 10, 2018 09:37 AM

December 07, 2018

Pradhvan Bisht (pradhvan)

To start, just start !

It’s almost the end of 2018 and most people are starting to work on their new year’s goals; well, I am doing that too, a bit early I guess 😛

I usually plan to write A LOT, but every time, once I start a post and work on it, I give up either midway or before it even reaches the halfway mark, the reason being I want it to be just perfect. What I mean to say is that I write posts in the utmost detail, like the last post I started and could not publish, about PyCon India 2018. I wrote a lot in it and gave almost every good detail I remembered from the event, but like most of my posts, it only made it to the halfway mark.

I don’t know what it is; either I think I have a massive audience that eagerly waits for my blog, or I want to be that kid who writes an awesome post every time he sits at the computer. Whatever it is, I want to change it. I want to write frequently. While looking for a solution to this problem, I remembered some lines from the CodeNewbie podcast episode with Julia Evans as the guest (she writes awesome blogs, go check them out if you haven’t). Julia mentions that while she was at the Recurse Center she picked up the technique of writing a small bit every day. She had a lot going on during that time and didn’t get much time to write, but she managed by writing consistently, without thinking much about the size of the content or about writing the perfect post.

The thing I took from the conversation was to write frequently without worrying much about perfection. It’s not like many people read my blog, so nobody is going to hunt me down to insult my poor posts, haha! Only a handful of good people from #dgplug read it 🙂 so I have nothing to lose.

Things I will follow from now on:

  1. Keep posts short and crisp: 400 words or less.
  2. Blog every second day; I read a lot these days, mostly tech, so I can put those notes up in the form of posts that will help me in the future too.

So yeah, hopefully you will be seeing a lot of my posts (mostly bad in the beginning, so I apologize in advance) from now on 😛

by Pradhvan Bisht at December 07, 2018 02:30 PM

November 29, 2018

Aman Verma (nightwarriorxxx)

Learning to talk to computers with python

“Patience and perseverance have a magical effect before which difficulties disappear and obstacles vanish.”

-John Quincy Adams

Starting anew and aiming to be more consistent this time, I joined Operation Blue Moon today (an initiative by @mbuf). My aim of learning to talk to computers will be tough, I know, but I also know I have to push my limits rather than just sit inside my comfort zone. Hoping everything will be fine, I will try to stay as positive and focused as I can.

Coming to some cool stuff I learned today, all credit for which goes to @kushaldas (the insane).

Command prompt from python

 
#! /usr/bin/env python3
from cmd2 import Cmd

# A minimal REPL: cmd2's Cmd class provides the interactive loop
class Repl(Cmd):
    def __init__(self):
        super().__init__()

if __name__ == '__main__':
    app = Repl()
    app.cmdloop()

Save the script and run it. Inside the prompt, you can run command-line commands by prefixing them with !, for example !ls or !pwd.

Tip of the day:-

Never use `pip` with `sudo`

Happy Hacking

by nightwarrior-xxx at November 29, 2018 06:30 PM

Sehenaz Parvin

Can we?

Can we please stop using filter on our photos? Can we just stop thinking ourselves a sheet of white paper with no marks? Can we please stop from doing what we don’t want? Can we please stop caring others ? Can we please stop ourselves from behavioral sciences? Can we please stop insulting others ? Can we stop demotivating others? Can we stop judging each others personal views? Can we stop telling what to wear ? Can we stop telling how to live life? Can we stop telling what to choose? Can we stop advising ? Can we stop hating others? Can we stop our so called show-off in public? Can we please stop blaming others? Can we please stop imposing others? Can we please stop !!!!!! Can we please come out of this trap!!! Can we wait a second!!!

Can we be “us” for a minute? Can we be “me” for a minute? Can we please stop blaming ourselves for everything? Can we please use our makeup for looking prettier, not shittier? Can we please use a normal filter one day? Can we please think normally and do what we want? Can we please stop showing off what we are not? Can we please not take ourselves for granted for a minute? Can we please think about “me” for a minute? Can we stop elbowing each other in the want of limelight? Can we stop thinking about what people will think and how they will judge us? Can we stop changing ourselves for others?

Can we? Please, women, it’s high time now. Stop pretending! Start perceiving! Start convincing! We are all beautiful souls. Don’t ditch yourself, your efforts, your dreams and your soul for pettifoggers. We use makeup not to look prettier but to stop ourselves from looking shittier! This is not done! We fear clicking photos with a normal filter. Why? For whom? And for what? We should not have to look sexy to get likes or a life partner. We should not need apps to find our life partner! Take a break from this biased society. Don’t live your life according to them. Don’t do something just because it is in trend. We are all different and that’s what makes us special. We are special already. We just need to recognise ourselves. Live your dream! That’s what is gonna make “you” happy.

I hope you all will agree with me. We all already know this but we don’t do this! Start believing in “you”. The “you” world is very beautiful.💕 Get out of the reel and face the real. That’s gonna be a revolutionary transformation.

by potters6 at November 29, 2018 05:47 PM

November 21, 2018

Ashish Kumar Mishra (ash_mishra)

Hackman 2k18

Hackman 2k18 was the 3rd version of Hackman, an intercollege 24 hour open-theme hackathon organised by the Department of Information Science and Engineering, Dayananda Sagar College of Engineering, Bangalore.

I have been a part of Hackman since the beginning. In the first Hackman, I was just a participant. I was in my second year and I did not know anything about the technologies that were trending. I was breaking my head with HTML and CSS and didn’t even know anything about the back-end systems and databases. But the things I learnt at the first Hackman were amazing. I was mesmerised and shocked to see so many things that were happening around me and I was completely unaware of it. I still remember Farhaan taking a Git session at Hackman and I walked out in the middle of it because I could not figure out what he was talking about. I regret doing that till date.

In the second version of Hackman, Hackman 2.0, I was in the core team organizing the event and I looked over the finances. This was the very first time I was a part of organizing something this big in college. But, with the help of my seniors and with sheer dedication, we were able to pull off a great event, even bigger than last time. Hackman 2.0 taught me how to deal with people which turned out to be a lot more difficult than dealing with computers. My seniors taught me a lot and helped me deal with the unexpected situations that occurred throughout the course of the event. Doing all this was very new for me and I enjoyed it a lot. Even this time, the technology mesmerised me, and I was left gaping at the things the contestants were doing. But, I realized that it was okay. It was just the lack of my knowledge that left me in a shock and I became more determined to learn new things and apply them. I was very happy being a part of Hackman 2.0 and when it ended, I was determined to organize the next Hackman bigger and better.

The time for the next Hackman, Hackman 2k18, came, and I was ready for it. I had both the experience of a participant and that of an organiser. I knew how to plan for the event and what to do when the plan fails. Throughout the preparation and execution of the event, I had the constant support of Farhaan Bukhsh, Abhinav Jha, Devesh Verma, Sudhanva MG, Ashish Pandey, Abhishek Agarwal, Goutham ML and all my seniors, who are as connected to Hackman as I am. Some of the things I learnt in this Hackman: the amount of planning required to organize the event is huge, and getting sponsors should be the first and foremost priority. We also had a staircase meeting at 5:00 am during the event, where Farhaan, Abhinav and Saptak talked about the new things which could be done next year. It was amazing listening to the mentors and the ideas they had in mind for the next Hackmans, to make it the greatest hackathon in Bangalore.

When I look back to the first Hackman, I still cannot believe how far I have come: from being a timid contestant who did not know what to do at a hackathon, to handling the whole Hackman team as the Event Manager. The only thing that matters is how much you are willing to dedicate yourself to something, and how much you are willing to learn no matter what. This event taught me so many things in life, technical and non-technical, that I will forever be grateful for it. I would love to see the next Hackman bigger and better than what we organised, and maybe someday it will be the biggest hackathon in Bangalore. #WeAreHackman

by Ashish Kumar Mishra at November 21, 2018 11:56 AM

October 29, 2018

Anu Kumari Gupta (ann)

Enjoy octobers with Hacktoberfest

I know what you are going to do this October. Scratching your head already? No, don’t, because I will explain in detail all that you can do to make this October a remarkable one by participating in Hacktoberfest.

Wondering what the buzz around Hacktoberfest is all about? 🤔

Hacktoberfest is like a festival celebrated by the open source community that runs throughout the month. It is a celebration of open source software, and it welcomes everyone, irrespective of how much they know about open source, to participate and make their contribution.

  • Hacktoberfest is open to everyone in our global community!
  • Five quality pull requests must be submitted to public GitHub repositories.
  • You can sign up anytime between October 1 and October 31.

<<<<Oh NO! STOP! Hacktoberfest site defines it all. Enough! Get me to the point.>>>>

Already had enough of the rules and regulations and still wondering what it is all about, why to do it, and how to get started? Welcome to the right place. This Hacktoberfest centers a lot around open source. What is it? Get your answer.

What is open source?

If you are stuck on the name itself, don’t worry; it’s nothing other than what the phrase ‘open source’ means. Open source refers to the availability of the source code of a project, work, software, etc. to everyone, so that others can see it, make changes that can benefit the project, share it, and download it for use. The main aim is transparency, collaborative participation, and the overall development and maintenance of the work, and it is highly valued for its redistributive nature. With open source, you can also organize events, schedule your plans, and host them on an open source platform. The changes that you make to others’ work are termed contributions. A contribution does not necessarily have to be core code; it can be anything you like: designing, organizing, documentation, projects of your liking, etc.

Why should I participate?

The reason you should is that you get to learn, grow, and eventually develop skills. When you make your work public, others analyze it and give you valuable feedback through comments and issues. The kind of work you do gets you recognized among others. Through active contribution, you also find mentors who can guide you through a project, which helps you in the long run.

And did I tell you? You get a T-shirt for contributing! Hacktoberfest lets you win a T-shirt by making at least 5 contributions. Maybe this is motivating enough to start, right? 😛 Time to enter the Open Source World.

How to enter into the open source world?

All you need is “Git” and an understanding of how to use it. If you are a beginner and don’t know how to start, or have difficulty starting off, refer to “Hello Git” before moving further. That article gives a basic understanding of Git and shows how to push your code through Git to make it available to everyone. Understanding is essential, so take your time going through it and grasping the concept. If you are good to go, you are now ready to contribute to others’ work.

Steps to contribute:

Step 1: You should have a GitHub account.

Refer to the post “Hello Git”, if you have not already. The idea there is a basic understanding of the Git workflow and creating your first repository (your own piece of work).

Step 2: Choose a project.

I know choosing a project is a bit confusing. It seems overwhelming at first, but trust me, once you get insight into how it works, you will feel proud of yourself. If you are a beginner, I would recommend first understanding the process by making small changes, like correcting mistakes in a README file or adding your name to the contributors list. As I already mentioned, not every contribution involves coding. Select whatever you like and feel you can change in a way that improves the current piece of work.

There are numerous beginner-friendly as well as cool projects that you will see labelled hacktoberfest. Pick one of your choice. Once you have selected a project, get into it and follow the rest of the steps.

Step 3: Fork the project.

You will come across several similar posts that give you instructions on what to perform to reach the objective, but the most important thing is that you understand what you are doing and why. Here I am, to explain why exactly you need to run these commands and what these terms mean.

Forking means creating a copy of someone else's repository under your own GitHub account. By forking, you make a copy of the project for yourself to change. The reason we do this is that you would not (and usually cannot) make changes directly to the main repository. The changes you make stay with you until you finalize them in commits and let the owner of the project know about them.

You must be able to see the fork option somewhere at the top right.


Do you see the number beside it? That is the number of times this repository has been forked. Click on the fork option and you will see it forking.


Notice the change in the URL: you will see the repository is now under your account. Now you have your own copy of the project.

Step 4: Clone the repository

What is cloning? It is downloading the repository so that it is available on your desktop for making changes. Now that you have the project in hand, you are ready to make the changes you feel are necessary, using the tools and applications on your machine.

The green "Clone or download" button shows you a link, plus an option to download the project directly.

If you have git installed on your machine, you can perform commands to clone it as:

git clone "copied url"

Here, copied url is the URL shown to you for copying.

Step 5: Create a branch.

A branch is like a separate directory on your computer: each branch holds a different version of the changes you make. Branches are essential because they let you track your changes in isolation.

To perform the operations on your machine, all you need to do is change into the repository directory:

cd <project-name>

Now create a branch using the git checkout command:

git checkout -b <branch-name>

The branch name is up to you. It can be anything you choose, but keep it relatable to the change you are making.

Step 6: Make changes and commit

List the files and subdirectories with the help of the ls command, find the file or directory you have to change, and make the necessary changes. For example, if you have to update the README file, you will need an editor to open the file and write to it. After you are done updating, stage and commit your changes:

git add README.md
git commit -m "Update README"

and you are ready for the next step.

Step 7: Push changes

Now you want these changes to be uploaded to the place they came from, so, in Git's phrasing, you "push changes". You do this because, after improving the project, you want the owner or creator of the project to know about it.

So, to push your changes, run:

git push origin <branch-name>

origin is the default shortname Git gives to the URL you cloned from, so you can reference it easily. You can use any other shortname in place of origin, but you then have to use the same one in this step as well.

Step 8: Create a pull request

If you go to the repository on GitHub, you will see information about your updates, and beside that a "Compare & pull request" option. A pull request is a request to the creator of the main project to look at your changes and merge them into the main project, if that is something the owner allows and wants. The owner of the project reviews the changes you made and applies them as he/she sees fit.

And you are done. Congratulations! 🎉

Not only this – you are always welcome to go through a project's issue list and try to solve a problem: first comment to let everyone know the idea you have for solving the issue, and once the idea is approved, contribute as above. You can then make a pull request and reference the issue it solves.

But, but, but… why don't you create issues on your own working project and add the Hacktoberfest label for others to solve? You will be amazed by the participation. You are the admin of your project; people will create issues and pull requests, and you review them and merge them into your main project. Try it out!

I hope you found this useful and enjoyed doing it.

Happy Learning!

by anuGupta at October 29, 2018 08:20 PM

Kuntal Majumder (hellozee)

Another year, nice one

So apparently one of the oldest communities in probably the whole of India celebrated its 2nd anniversary on the 28th, after being revived back in 2016. And you know what, this time it was a Capture The Flag event – something new, tried by the new group of people who joined hands to not let this community go into hibernation again.

October 29, 2018 07:28 AM

October 22, 2018

Jagannathan Tiruvallur Eachambadi

New Templates in Dolphin

I was using kio-gdrive to access my Google Drive account from Dolphin (the file manager). Since Drive is mounted as a virtual filesystem, I was not able to save files into it directly from LibreOffice or any other external program. So I thought creating a new document from Dolphin and then editing the empty document would be easier. But information was scant on how to put this together. I knew we needed a template, which is just an empty file, but I didn't know how to tie it all together so that it shows up in Dolphin's "Create New" context menu.

Steps to get it working

The example assumes creating an empty document (an ODT file). First create a template file by saving an empty document in ~/Templates. This is just a suggested directory; any place would be fine. As of KF5, the path for user templates is ~/.local/share/templates, which can be obtained from kf5-config --path templates.

So in ~/.local/share/templates, create an application file like so

# ~/.local/share/templates/writer.desktop
[Desktop Entry]
Version=1.0
Name=Writer Document
Terminal=false
Icon=libreoffice-writer
Type=Link
URL=$HOME/Templates/Untitled 1.odt

After this, Dolphin should pick the entry up and show it in the "Create New" context menu. One has to take care to give the files a proper extension when naming them, though, since Google Docs won't like files without an extension, although they can be opened from Drive into Docs.

by Jagannathan Tiruvallur Eachambadi (jagannathante@gmail.com) at October 22, 2018 10:46 PM

October 16, 2018

Abdul Raheem (ABD)

Guest Session by warthog9 (IRC nick) and Emacs Sessions by Shakti Kannan (IRC nick: mbuf)

Hello, world!

It feels really good to be back to writing my blog after about 1–1.5 months. As I said in my previous blog, I got busy with my college work, and so missed PyCon India and the chance to meet the people I was interacting with at dgplug, and obviously the chance to meet the mentors as well 😔. I am still busy with college, but I thought I should make time for this and learn something better for my future than just rote-learning answers for the exams. And one thing I was really missing was Jason Braganza's comments on how to improve my blogs 🙂.

I have gone through some 3–4 of the Emacs session logs and got to know many commands. I cannot type each and every command here – it would become a very lengthy blog, and I don't want that – so I will leave a link to the dgplug logs, where you can find all the Emacs sessions starting from 16-Aug-2018. Some of the basics: to open Emacs from the terminal without customizations, type emacs -Q. To yank (paste) previously killed text, type C-y (C stands for the Control key), and to copy, type M-w (M stands for the Alt/Meta key). To move to the end of the line, type C-e, and to the beginning of the line, C-a. To move one sentence forward, type M-e, and backward, M-a. To move forward one paragraph, type M-}, and backward, M-{. To save a file under a new name, type C-x C-w, and to open (find) a file, type C-x C-f. These are some of the basic commands I remember; again, do check the dgplug logs for more.

Buffer commands:
The next thing I learned was buffer commands. Everything is a buffer in Emacs – chatting on an IRC channel, composing an email, writing code: each is a buffer. I will mention the ones I remember; again, check the logs for more. To switch to another buffer, type C-x b (and you can come back to the scratch buffer the same way). To display the list of buffers, type C-x C-b. To close all other windows, type C-x 1. To rename a buffer, use M-x rename-buffer. To save buffers to their files, use C-x s, and to move the cursor to the next window, C-x o. There are many other buffer commands.

Window commands:
The next thing is window commands; again, I will mention some of them, and do check the logs for more info. To split a window into two, one above the other, type C-x 2. To delete the current window (not the buffer or file), type C-x 0 (zero). You can enlarge the current window using C-x ^, and if you want to scroll the text in the other window, type C-M-v. There are many other window commands.

Frame commands:
Next are frame commands. If you have a large screen you can open multiple GNU Emacs frames. You can create one using C-x 5 2, move the cursor between frames using C-x 5 o, and open (find) a file in a new frame with C-x 5 f. Those are some basic frame commands. I still have to go through the rest of the logs, and you can go through all of them too – I have linked above to all of the logs from day 1.

In between these Emacs sessions there was a guest session by warthog9 (IRC nick). I don't know his name, but it was a really interesting session (link to that session). He gave some amazing suggestions and told one story which was also amazing :). These are some of the suggestions he gave when Kushal Das asked him to share advice with his students.

  • His first suggestion was, obviously, to get some virtualization software running somewhere; KVM/QEMU is free if you are comfortable with Linux. If you have a Mac, Windows or Linux machine, VMware has good options (full disclosure: he works for VMware).
  • Try setting up and playing with ownCloud/Nextcloud (it doesn't really matter which), Squeezebox or another music-jukebox kind of server, and do up a Windows file-sharing setup (Samba, specifically).
  • Once you have Samba working, figure out how to export the same directories via NFSv4.
  • Set up a modern website with nginx or Apache. You could even run containers to get it all working – that could be a bit advanced, but it would be a good learning opportunity.
  • Once you have the above things ready, go play with collectd and Grafana and collect some interesting statistics and graphs from your other VMs – seeing pretty graphs of how your machines are doing is always helpful.
Happy learning 🙂

by abdulraheemme at October 16, 2018 07:35 PM

October 09, 2018

Mohit Bansal (philomath)

Internet Relay Chat!

This blog post will cover the basics of IRC (Internet Relay Chat) and who should use it. It's been a long time since I first used IRC, and it was not a very pleasant experience at first. Now, though, I use IRC as my primary mode of communication. I won't be surprised if you haven't heard of IRC yet, or if you have heard of it but never tried it. I know what you are thinking right now: "IRC, stupid, eh!".

by Abstract Learner (noreply@blogger.com) at October 09, 2018 04:48 PM

September 14, 2018

Kuntal Majumder (hellozee)

A Year Passes by

ILUGD, better known as India Linux Users Group – Delhi, is a LUG based in Delhi NCR. Bear in mind that it is "India" and not "Indian" – a lot of people get that wrong. "Indian" would mean it is exclusively for Indians, which we are obviously not; rather, our group is based in Delhi, which is in India, so, hashtag_blah_blah.

September 14, 2018 04:13 PM

Vaibhav Kaushik (homuncculus)

Custom Live Ubuntu

This blog is about making a live CD/DVD from the main system on your hard drive. This is useful if you want to build a clean live CD, or a minimal rescue CD. We used it to create a beginner-friendly wargame to introduce Linux to everyone. The theme was similar to that of Bandit, with very elementary Linux commands and only 11 levels.

by Vaibhav Kaushik (vaibhavkaushik@disroot.org) at September 14, 2018 12:42 PM

September 12, 2018

Ratan Kulshreshtha

Workstation Setup Using Ansible

I use Fedora on my Dell Vostro 3560, and I have a habit of reinstalling Fedora whenever a new version is released. That means I have to install and configure many things again and again, and sometimes I forget to install or configure something. So I asked myself: is there a way to do this that is immune to human error – how can I automate all this?

September 12, 2018 05:54 AM

September 10, 2018

Jaydeep Borkar(jaydeep)

Introduction to Natural Language Processing, Part 1.

Hello folks, I’ve just started my NLP journey and will be happy to share my learning process with you. Here’s an article regarding Introduction to Natural Language Processing.

The essence of Natural Language Processing lies in making computers understand our natural language. That’s not an easy task though. Computers can understand the structured form of data like spreadsheets and the tables in the database, but human languages, texts, and voices form an unstructured category of data, and it gets difficult for the computer to understand it, and there arises the need for Natural Language Processing.

There's a lot of natural language data out there in various forms, and it would be very useful if computers could understand and process that data. We can train models in accordance with our expected output in different ways. Humans have been writing for thousands of years, so there is a lot of literature available, and it would be great if we could make computers understand it. But the task is never going to be easy. There are various challenges, like understanding the correct meaning of a sentence, correct Named-Entity Recognition (NER), correct prediction of the various parts of speech, and coreference resolution (the most challenging one, in my opinion).

Computers can't truly understand human language. If we feed in enough data and train a model properly, it can distinguish and categorize various parts of speech (noun, verb, adjective, etc.) based on previously fed data and experience. If it encounters a new word, it makes the nearest guess, which can be embarrassingly wrong a few times.

It's very difficult for a computer to extract the exact meaning from a sentence. For example – "The boy radiated fire-like vibes." Did the boy have a very motivating personality, or did he actually radiate fire? As you can see, parsing English with a computer is going to be complicated.

There are various stages involved in training a model. Solving a complex problem in Machine Learning means building a pipeline. In simple terms, it means breaking a complex problem into a number of small problems, making models for each of them and then integrating these models. A similar thing is done in NLP. We can break down the process of understanding English for a model into a number of small pieces.

My friend recently went diving at San Pedro island, so I'd love to take that example. Have a look at this paragraph – "San Pedro is a town on the southern part of the island of Ambergris Caye in the Belize District of the nation of Belize, in Central America. According to 2015 mid-year estimates, the town has a population of about 16,444. It is the second-largest town in the Belize District and largest in the Belize Rural South constituency".

(source-Wikipedia)

It would be really great if a computer could understand that San Pedro is a town in the Belize District in Central America with a population of 16,444, and that it is the second-largest town in the Belize District. But to make the computer understand this, we need to teach it the very basic concepts of written language.

So let’s start by creating an NLP pipeline. It has various steps which will give us the desired output(maybe not in a few rare cases) at the end.

STEP 1: Sentence Segmentation

Breaking the piece of text into individual sentences.

  1. San Pedro is a town on the southern part of the island of Ambergris Caye in the Belize District of the nation of Belize, in Central America.
  2. According to 2015 mid-year estimates, the town has a population of about 16,444.
  3. It is the second-largest town in the Belize District and largest in the Belize Rural South constituency.

For a first attempt at a sentence segmentation model, we could split the text whenever we encounter a sentence-ending punctuation mark. Modern NLP pipelines, though, have techniques that can split sentences even when the document isn't formatted properly.
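The naive punctuation rule can be sketched in a few lines of Python (a toy illustration with a made-up helper name, not a production segmenter – it mis-splits abbreviations such as "Dr." or "U.S."):

```python
import re

def naive_sentence_split(text):
    # Split after '.', '!' or '?' when followed by whitespace.
    # Deliberately simplistic: it breaks on abbreviations like "Dr.".
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]

text = ("San Pedro is a town in the Belize District. "
        "The town has a population of about 16,444. "
        "It is the second-largest town in the Belize District.")
for sentence in naive_sentence_split(text):
    print(sentence)
```

Splitting on punctuation alone is exactly the weakness that the modern pipeline techniques mentioned above work around.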

STEP 2: Word Tokenization

Breaking the sentence into individual words, called tokens. We can tokenize whenever we encounter a space, and we can train a model that way. Even punctuation marks are considered individual tokens, as they carry meaning of their own:
'San', 'Pedro', 'is', 'a', 'town' and so on.
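As a sketch (again a toy with a made-up function name – real tokenizers in libraries like NLTK or spaCy handle contractions and multi-word names such as 'San Pedro' properly):

```python
import re

def naive_tokenize(sentence):
    # Runs of word characters become tokens; each punctuation mark
    # is kept as its own token, since punctuation carries meaning too.
    return re.findall(r"\w+|[^\w\s]", sentence)

print(naive_tokenize("San Pedro is a town."))
# → ['San', 'Pedro', 'is', 'a', 'town', '.']
```

Note that a space-based tokenizer sees 'San' and 'Pedro' as two separate tokens; recognizing them as one name is left to later steps.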

STEP 3: Predicting Parts of Speech for each token

Predicting whether the word is a noun, verb, adjective, adverb, pronoun, etc. This helps in understanding what the sentence is talking about, and can be achieved by feeding the tokens (and the words around them) to a pre-trained part-of-speech classification model. The model was fed a lot of English words with their parts of speech tagged, so that it can classify the similar words it encounters in the future. Again, the model doesn't really understand the 'sense' of the words; it just classifies them on the basis of previous experience. It's pure statistics.

The process will look like this:

Input → part-of-speech classification model → Output
Town → common noun
Is → verb
The → determiner

And similarly, it will classify various tokens.
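The classification idea can be illustrated with a toy lookup tagger (the dictionary and tag names here are invented for illustration; a real tagger is a trained statistical model that also looks at the surrounding words):

```python
# Toy part-of-speech "model": a plain dictionary lookup.
# Unlike a trained classifier, it ignores context entirely.
TOY_TAGS = {
    "town": "common noun",
    "is": "verb",
    "the": "determiner",
}

def toy_pos_tag(tokens):
    # Unknown words get "unknown" -- a real model makes its nearest guess.
    return [(tok, TOY_TAGS.get(tok.lower(), "unknown")) for tok in tokens]

print(toy_pos_tag(["The", "town", "is"]))
# → [('The', 'determiner'), ('town', 'common noun'), ('is', 'verb')]
```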

STEP 4: Lemmatization
Feeding the model the root form of each word.
For example – "There's a buffalo grazing in the field."
"There are buffaloes grazing in the field."
Here, both buffalo and buffaloes mean the same thing, but the computer can treat them as two different terms, since it doesn't know anything by itself. So we have to teach the computer that both terms mean the same – that both sentences are talking about the same concept. We need to find the most basic form, the root form or lemma, of the word and feed that to the model.

In a similar fashion, we can use it for verbs too: 'play' and 'playing' should be considered the same.
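A crude suffix-stripping sketch shows the idea (real lemmatizers, such as NLTK's WordNet lemmatizer, use vocabularies and morphological analysis rather than hard-coded rules like these):

```python
def toy_lemmatize(word):
    # Hard-coded suffix rules, just to illustrate reducing a word to a lemma.
    w = word.lower()
    if w.endswith("oes"):   # buffaloes -> buffalo (strip the plural "es")
        return w[:-2]
    if w.endswith("ing"):   # playing -> play
        return w[:-3]
    if w.endswith("s"):     # towns -> town
        return w[:-1]
    return w

print(toy_lemmatize("Buffaloes"), toy_lemmatize("playing"))
# → buffalo play
```

Rules this crude misfire constantly (they would turn 'is' into 'i'), which is why real lemmatizers rely on dictionaries instead.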

STEP 5: Identifying stop words

There are various words in the English language that are used very frequently, like 'a', 'and', 'the', etc. These words add a lot of noise to statistical analysis, so we can take them out. Some NLP pipelines categorize them as stop words and filter them out before doing statistical analysis. They are, however, still needed to understand the dependencies between tokens and get the exact sense of the sentence. The list of stop words varies and depends on the kind of output you are expecting.
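Filtering stop words is then a one-line list comprehension (the stop-word list below is a small made-up sample; real lists, like NLTK's, are much longer and task-dependent):

```python
# A tiny sample stop-word list; the right list depends on the task.
STOP_WORDS = {"a", "an", "and", "the", "of", "in", "is", "it", "on"}

def remove_stop_words(tokens):
    return [t for t in tokens if t.lower() not in STOP_WORDS]

tokens = ["San", "Pedro", "is", "a", "town", "in", "the", "Belize", "District"]
print(remove_stop_words(tokens))
# → ['San', 'Pedro', 'town', 'Belize', 'District']
```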

STEP 6.1: Dependency Parsing

This means finding out the relationship between the words in the sentence and how they are related to each other. We create a parse tree in dependency parsing, with root as the main verb in the sentence. If we talk about the first sentence in our example, then ‘is’ is the main verb and it will be the root of the parse tree. We can construct a parse tree of every sentence with one root word(main verb) associated with it. We can also identify the kind of relationship that exists between the two words. In our example, ‘San Pedro’ is the subject and ‘island’ is the attribute. Thus, the relationship between ‘San Pedro’ and ‘is’, and ‘island’ and ‘is’ can be established.

Just like we trained a Machine Learning model to identify various parts of speech, we can train a model to identify the dependency between words by feeding many words. It’s a complex task though. In 2016, Google released a new dependency parser Parsey McParseface which used a deep learning approach.

STEP 6.2: Finding Noun Phrases

We can group words that represent the same idea. For example – "It is the second-largest town in the Belize District and largest in the Belize Rural South constituency." Here, the tokens 'second', 'largest' and 'town' can be grouped together, as together they describe the same thing. We can use the output of dependency parsing to combine such words. Whether to do this step or not depends entirely on the end goal, but it's a quick win when we don't need much information about which words are adjectives and would rather focus on other important details.

STEP 7: Named Entity Recognition(NER)

San Pedro is a town on the southern part of the island of Ambergris Caye in the Belize District of the nation of Belize, in Central America.

Here, NER maps words to real-world places – places that actually exist in the physical world. Using NLP, we can automatically extract the real-world places mentioned in a document.

If the above sentence is the input, NER will map it like this:
San Pedro – Geographic Entity
Ambergris Caye – Geographic Entity
Belize – Geographic Entity
Central America – Geographic Entity

NER systems look at how a word is placed in a sentence and use other statistical models to identify what kind of word it actually is. For example, 'Washington' can be a geographical location as well as a person's last name. A good NER system can tell the difference.

Kinds of objects that a typical NER system can tag:
People’s names.
Company names.
Geographical locations.
Product names.
Date and time.
Amount of money.
Events.
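The lookup half of this can be sketched with a toy gazetteer (a lookup table of known names, invented here for illustration). It has none of the context-awareness of a real NER system, which is exactly why statistical models are needed to disambiguate cases like 'Washington':

```python
# Toy "NER": match known place names from a gazetteer (lookup table).
# A real system uses sentence context and statistics, not a fixed list.
GAZETTEER = {
    "San Pedro": "Geographic Entity",
    "Ambergris Caye": "Geographic Entity",
    "Belize": "Geographic Entity",
    "Central America": "Geographic Entity",
}

def toy_ner(sentence):
    # Return every known entity that appears verbatim in the sentence.
    return [(name, label) for name, label in GAZETTEER.items() if name in sentence]

sentence = ("San Pedro is a town on the island of Ambergris Caye "
            "in the nation of Belize, in Central America.")
for name, label in toy_ner(sentence):
    print(name, "-", label)
```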

STEP 8: Coreference Resolution

San Pedro is a town on the southern part of the island of Ambergris Caye in the Belize District of the nation of Belize, in Central America.
According to 2015 mid-year estimates, the town has a population of about 16,444.
It is the second-largest town in the Belize District and largest in the Belize Rural South constituency.

Here, we know that 'it' in the third sentence stands for San Pedro. But for a computer it isn't possible to understand that both tokens refer to the same thing, because it treats the sentences as two different things while processing them. Pronouns are used with high frequency in English, and it is difficult for a computer to understand that two such mentions are the same. Hence this step – indeed the most difficult one – is used.

In the upcoming articles, I'll try to share the history of NLP, how it evolved, various past models and why they failed, NLP libraries, and coding an NLP pipeline in Python. I'd love to discuss various papers as well.

Please, feel free to correct me on any topic if I went wrong somewhere and do let me know about improvements.

by Jaydeep Borkar at September 10, 2018 01:24 PM

September 08, 2018

Prashant Sharma (gutsytechster)

Interactive Rebase

Git rebase has an interactive mode that helps you through rough times while working with Git. You might come across situations where you have to alter what you've already committed; interactive rebase provides tools that help us do exactly that. Let's do a quick recap of what rebasing is: rebasing re-bases our local commits on top of the commits made in the base branch. Because of this, each commit's hash value changes, and it acts as a totally new commit with the same changes as the earlier one. Hence, we shouldn't rebase any public branch. That's just a rough idea of rebasing; for more detailed insight you can refer here.

Rewriting the commit history

Interactive rebasing helps you rewrite your commit history, in case you find that you committed something wrong or that a previous commit required more work before committing. To tackle these problems, we have this magic wand in our hands. So, let's see how we can use it.

Let's create a few files in our Git repo (you know how to do it, right?). First create a file named first.txt with the following content in it:

This is the first file in this repository.

and then we add the changes to the staging area, and finally commit them as

git commit -m "Add first.txt"

After this, create another file named second.txt with the following content in it:

This is the second file in this repo.
This content needs to be deleted.
We'll do it later

And commit it as

git commit -m "Add third.txt"

Yeah, I know the file name is second.txt but I have done it intentionally. We’ll correct it later. Just keep reading. 🙂

Finally create the actual third.txt with the following content in it:

Yeah! this one is the last file.

and commit it as

git commit -m "Add actual third.txt"

At last, enough preparation. Now it's time to use our magic wand. As of now, our commit history looks like this:

d6f128b Add actual third.txt
b2d48ff Add third.txt
e57559a Add first.txt

What? You also want to see your commit history like this instead of the long output of git log? OK, let me tell you then: I just used a few options which git log supports:

git log --pretty=format:"%h %s"

To get to know more about such options, do give a read to this.

Now we want to correct the content of our second file and then commit it with correct commit message. For that we can use git rebase as

git rebase -i HEAD~2

OK, let's first understand what this command means. For interactive mode we used the -i option, short for --interactive. Then we give the number of commits we want to consider. As we know, HEAD by default refers to the latest commit of the current branch. So, to take 2 commits starting from HEAD (since the second commit from HEAD needs to be corrected), we used the ~ symbol. When you hit enter, a text editor opens with the following content:

pick b2d48ff Add third.txt
pick d6f128b Add actual third.txt

# Rebase e57559a..d6f128b onto e57559a (2 commands)
#
# Commands:
# p, pick = use commit
# r, reword = use commit, but edit the commit message
# e, edit = use commit, but stop for amending
# s, squash = use commit, but meld into previous commit
# f, fixup = like "squash", but discard this commit's log message
# x, exec = run command (the rest of the line) using shell
# d, drop = remove commit
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.
#
# Note that empty commits are commented out

The last two commits are shown, oldest to newest. Git is smart enough to explain the various options that can be applied to each commit. By default, each commit is marked with the pick command; if you leave a commit with pick, that commit is applied as it is.

In our case we want to edit the first commit in the list. If you look at the list of available options, can you guess which one we should use? Yup, you are correct: the edit option. We can also use its short form, i.e. 'e', as

e b2d48ff Add third.txt
pick d6f128b Add actual third.txt
...

Now, as you save the file, you will see a message like:

Stopped at b2d48ff...  Add third.txt
You can amend the commit now, with

  git commit --amend 

Once you are satisfied with your changes, run

  git rebase --continue

So what exactly happened? Let's understand it: rebase takes each commit in the list one by one and performs the command set against it. In our case, the very first commit in the list came with the 'e' option, which tells rebase to stop at that commit until the user tells it to continue. And, as expected, it stopped at the commit. Now if you do git log as

git log --pretty=oneline

and it would give you the output as

b2d48ff1ba706f9751bd950e17355a9fb9a3fd99 (HEAD) Add third.txt
e57559a96cd0c5b9f2ef4e2dccbcf6f20b3b11d4 Add first.txt

You can see that our HEAD is now at the commit at which rebase stopped, and we can edit this file. So open second.txt and delete the last two lines. It will now be:

This is the second file in this repo.

and then add the changes to the staging area. Since this commit's message was also incorrect, we can correct it now using the --amend option, which amends the commit referred to by HEAD. Since our HEAD is at the second commit, we can amend it now. For that, write the following command:

git commit --amend

And hit enter. A text editor opens with the previous, incorrect commit message, so we can correct it now and save the file. Our commit message has now been amended; you can check that using the git log command. Now do git status:

interactive rebase in progress; onto e57559a
Last command done (1 command done):
edit b2d48ff Add third.txt
Next command to do (1 remaining command):
pick d6f128b Add actual third.txt
(use "git rebase --edit-todo" to view and edit)
You are currently editing a commit while rebasing branch 'master' on 'e57559a'.
(use "git commit --amend" to amend the current commit)
(use "git rebase --continue" once you are satisfied with your changes)

nothing to commit, working tree clean

Git tells us that an interactive rebase is in progress, meaning there are other commits yet to be processed, and it also tells you the next command that is going to be executed. So simply do git rebase --continue to continue the interactive rebase, and it will show you a message like:

Successfully rebased and updated refs/heads/master.

Now again look at the git log, in my case it is

9259ca9 Add actual third.txt
7092e07 Add second.txt
e57559a Add first.txt

You might have noticed that the hash values of the first two commits have now changed. You know why, right? Yes, it's because rebasing replays each commit and gives it a new hash value.

Now you know the procedure it follows. Let’s look at other available options rebase provides.

  • Squash

Indeed, squashing is one of the techniques most used by contributors and anyone who works with Git. In simple words, squashing melds more than one commit into a single commit. Let's try it. Suppose we want to squash the first two commits of our example above into one. We'll again give the same command for interactive mode – can you write it without scrolling up? Nice. Now the text editor opens with the following content:

pick 7092e07 Add second.txt
pick 9259ca9 Add actual third.txt
...

You just need to use the 's' option. Write it as:

pick 7092e07 Add second.txt
s 9259ca9 Add actual third.txt

Then save the file. As soon as you save it, another text editor opens, containing the commit messages of both commits, since Git doesn't know which commit message to take. You can keep either of them, or rewrite a new commit message. After writing the commit message, just save the file and it's done: you just squashed your commits into one. Now my commit history looks like this:

b805b0a Add second.txt and third.txt
e57559a Add first.txt
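If you want to experiment safely, the whole squash flow can be reproduced in a throwaway repository. In this sketch (the repo layout is invented to match the example above) GIT_SEQUENCE_EDITOR stands in for the hand edit you would normally make in the editor:

```shell
# Build a scratch repo with three commits, then squash the top two.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
for f in first second third; do
    echo "$f" > "$f.txt"
    git add "$f.txt"
    git commit -qm "Add $f.txt"
done
# Normally an editor opens here; this sed changes the second todo line
# from "pick" to "squash", exactly as you would by hand.
GIT_SEQUENCE_EDITOR='sed -i "2s/^pick/squash/"' \
GIT_EDITOR=true git rebase -q -i HEAD~2
git log --oneline    # two commits remain: the first, and the squashed pair
```

Interactively, you would make the same todo-list edit by hand and then edit the combined commit message when the second editor opens.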

  • fixup

If you look at the options in the rebase text editor, you’ll find this one too. fixup is also used to squash commits together. Then what’s the difference between the two? Well, it’s simple: fixup discards the commit message of the commit it is applied to. That’s it. You just have to use the ‘f‘ option instead of ‘s‘.
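A related convenience worth knowing: git commit --fixup records a correction as a ‘fixup! …’ commit, and git rebase -i --autosquash then fills in the ‘f’ lines in the todo list for you. A throwaway-repo sketch (file names invented for illustration):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo one > a.txt; git add a.txt; git commit -qm "Add a.txt"
echo two > b.txt; git add b.txt; git commit -qm "Add b.txt"
# A correction to b.txt, recorded as "fixup! Add b.txt"
echo fix >> b.txt; git add b.txt
git commit -q --fixup HEAD
# --autosquash pre-fills the todo list; GIT_SEQUENCE_EDITOR=true accepts it
GIT_SEQUENCE_EDITOR=true git rebase -q -i --autosquash HEAD~2
git log --oneline    # back to two commits; the fixup's message is gone
```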

  • reword

Well, if you want to correct the commit message of the latest commit, you can use the --amend option. But if you need to correct the message of an earlier commit, just use the ‘r‘ option for whichever commit’s message you want to correct. Rebase takes the commits one by one, and as soon as it reaches a commit marked ‘r’, you land in another text editor where you can correct the message and save the file. The rebase then continues by itself.

  • drop

If you want to remove a commit from your commit history, use the ‘d‘ option, short for drop, and it will be removed. Do notice that the changes introduced by that commit are removed along with it, not just its entry in your history.
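A quick throwaway-repo sketch (file names invented) makes this concrete; note how the dropped commit’s file disappears along with it:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
echo 1 > keep.txt;    git add keep.txt;    git commit -qm "Add keep.txt"
echo 2 > mistake.txt; git add mistake.txt; git commit -qm "Add mistake.txt"
echo 3 > also.txt;    git add also.txt;    git commit -qm "Add also.txt"
# Change "pick" to "drop" on the first todo line (the mistake commit)
GIT_SEQUENCE_EDITOR='sed -i "1s/^pick/drop/"' git rebase -q -i HEAD~2
git log --oneline    # "Add mistake.txt" is gone from history
ls                   # mistake.txt is gone from the working tree as well
```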

  • reorder

If you want to re-order your commits, just change their positions in the rebase text editor, and it’ll be done. So easy, isn’t it?

  • edit

We already used the edit option in the initial example. It is used when you want to stop at some particular commit and perform an action on that commit specifically.

These are some of the most helpful options while using git rebase. You’ll come across them quite often. However, there are some facts I’ve experienced that can confuse you if you go with the default options.

  1. git rebase -i ignores merge commits unless you use the flag ‘-p‘, short for --preserve-merges. For more details, you can refer to this answer on stackoverflow.
  2. You can’t rebase the initial commit of your repository. So, in the above example, if you tried replacing HEAD~2 with HEAD~3 in order to get all three commits into the rebase, it would give you an error like
fatal: Needed a single revision
invalid upstream 'HEAD~3'

This is because the third commit back is the root commit of our repository, so HEAD~3 does not exist. If you want to change the root commit of your repository and want to know how, you can refer to this answer on stackoverflow.

And yes, I want to stress once more that rebasing changes your commit history. So, you should never rebase a public branch. That’s all from my side.


I hope this is helpful. I am still a newbie at this, so if you find anything to correct, or have a doubt, don’t hesitate to write in the comments section below. See you soon.

Till then, be curious and keep learning!

by gutsytechster at September 08, 2018 07:06 PM

September 07, 2018

Jaydeep Borkar(jaydeep)

Volunteering for Kerala Flood Search and Rescue Team

A few days before August 18th, 2018, I came across the news that Kerala was suffering from its worst floods in nearly a century. As soon as I heard it, I googled to have a look at the situation. It was horrifying. Already, lakhs of people were stranded, displaced or missing, and lives had been lost; the floods caused huge destruction to the state’s property. I felt really sad about the situation. But the thing that pricked me the most was that I was unable to do anything to stop this, or to protect people and save their lives. I couldn’t do anything on my part apart from sharing posts on social media asking people to donate to the fund. 1,143 kilometers away, I spent quite a lot of time thinking about how I could help them. I was neither in the Navy nor in any local rescue team. What is the use of studying this much, getting a degree, equipping yourself with a lot of knowledge, if you can’t help those in need, those who are on the verge of losing their lives? I thought about it for a moment. Still, I was clueless as to how I could help.

On the morning of the 18th of August, I got to know that the Kerala Rescue Team needed developers who could keep their website running. I joined their Slack group as soon as possible, but found that the development work would require me to learn that particular technology first before I could be in a position to contribute, and this would take a considerable amount of time. It was quite a dilemma. But as I was exploring various other modes of contribution, I found Chat Support Volunteering.

It worked like this: relatives and friends of people who were stranded in the various districts of Kerala, without food, water and other necessities, contacted our chat support team through a portal. Our job was to extract all the necessary information from them, such as the number of people stranded, their contact numbers, names, location and coordinates (longitude, latitude), and their condition; to check the database for duplicate entries and file the entry if it was a new case; and then to follow up by contacting the local rescue teams, the authorities and the Navy carrying out rescue operations, providing them with the extracted information. We were provided with district-wise helpline numbers, rescue team contact numbers, details of various relief camps and updated data on the people in the relief camps.

The other chat support volunteers were really very helpful and kind. They helped and guided me throughout the operation. For instance, I don’t understand Malayalam, so they helped me by explaining what the requester was asking, with all the necessary data. I tried my hardest to respond to every rescue request I could, taking down all the required information and assuring people that they would be in touch with their loved ones soon. I was very clear about my role: to unite a child with his or her mother, to unite families and friends, and to save lives. I gave it everything I had. I skipped my lunch, because I have an extremely slow rate of transferring food particles per second to my digestive system, and that would have taken too much time; I couldn’t afford to miss any rescue request. I definitely had some snacks later in the evening, though. The real heroes were the Navy and the local rescue teams.

It was very painful to hear that people were losing their lives. I called a group of stranded people in one of the districts to assure them that help was arriving, and got to know that a few people there had already lost their lives. They were in a pretty bad mental and physical condition. I tried my level best to help by responding to the requests, with the tremendous help of the other chat support volunteers, people of unmatchable effort. We were available 24x7 on the helpline, in divided slots. I went offline at 1 am and came back by 5 am, with the other volunteers covering for the team in between.

The next day, a lot of people who had found their missing ones dropped touching messages to me. A woman had a relative missing for 3-4 days. I took her email and assured her that I would get in touch as soon as we found that person. That day, unfortunately, we couldn’t find them, so I dropped her an email saying we were trying our best. The next day, one of the volunteers sent me a message pointing to a similar person on Google Person Finder. Yes, it was the same person. I emailed her the update, and she got really emotional and happy, and sent a very touching email thanking our team. A person whose friends had gone missing from Ernakulam district contacted me on WhatsApp after he found them, leaving another very touching message. This was something to live for.

With the help of our tireless Chat Support Volunteering Team, a lot of people were rescued, a lot of lives were saved. The members of chat support and other teams were so dedicated that they spent day and night to rescue the people. Unfortunately, I couldn’t help for more than three days as the deadlines of my assignments were marching nearer. People from various parts of India, and even from other countries were helping out the rescue team in different ways.

Nothing in my life had ever felt as good as this: we had saved people’s lives by coordinating with various teams. I met highly dedicated, motivated, hardworking and humble volunteers, and made good friends too. There’s a very big and different world outside college life, a world built on empathy and good people, is what I learned. There’s much more to life than what our system restricts us to. Well, Kerala is on track to be a powerful state again; it’s recovering, and I’m super happy about that.

A huge respect for the team of volunteers who worked day and night behind this for weeks and months in various fields, the local rescue teams, and the Navy. I could contribute for only three days due to my assignment deadlines, but other volunteers contributed for a really long period, even after floods, to make Kerala strong again. I have a great respect for these guys.

I could go on and on writing about this, so to wrap it up: for me, it’s a great day to be alive!

Cheers!

 

 

by Jaydeep Borkar at September 07, 2018 03:21 PM

August 30, 2018

Ashish Kumar Mishra (ash_mishra)

DevConf.IN

DevConf.in was organized by Red Hat in Bangalore on August 4th and 5th. This was my first ever conference and I was very excited to attend it.

The day finally arrived and I was ready for it. I had planned my schedule and knew which events I wanted to attend. When I reached Christ University (the venue of the event), I was late. I missed the opening keynote but was just in time for the sessions. It turns out I didn’t even attend half of the sessions on my list.

You must be wondering, ‘why?’. Well, I realized on the first day itself that conferences are a place to meet people rather than to sit and code. The experience you gain by meeting different people of the open source community is amazing and fun. So I met a lot of new people, talked to them, and roamed here and there with them. The best part was that I was a student and the people I talked with were professionals. It helps to know where the industry is headed nowadays, what the current trends in technology are, how people became successful, where they started, what the various career options are, etc.

Moreover, the goodies that we got from Devconf also made me happy. I always wanted badges and stickers related to computer technologies and they were present in abundance.

We had a #dgplug staircase meet where I met many people whom I just knew by their IRC nicks. Sayan and Saptak talked about the conference, they shared their experience and then Sayan clicked our photos too.

The 4th and 5th of August made for one of the best weekends I have spent to date. I enjoyed and learnt at the same time. I am looking forward to attending more conferences and meeting more people.

 

by Ashish Kumar Mishra at August 30, 2018 08:12 PM

August 28, 2018

Prashant Sharma (gutsytechster)

Git Rebase

Heya folks!

Git is indeed an ocean of topics to dive into, and one can learn them only by using and experiencing them. So here I am with another of my learnings. If you work on a project, chances are that you’ll come across rebasing. So what exactly is rebasing? Let’s find out.

In open source contributions, one follows certain rules and regulations while contributing. Each organization has these rules defined in a specific file; most of the time you’ll see something like CONTRIBUTING.md. More often than not, you’ll be asked to work on a separate branch for each issue/feature you work on. This branch is called a feature branch. Since you are working on a different branch, you’ll want to keep it updated with the upstream master. So, do you know how you would do that?

Well, of course you know. You would simply do `git pull upstream master` and your branch would be up-to-date with master. However, this results in a superfluous merge commit that intervenes in your commit history. You might not want to clutter your commit history with these merge commits every time you do a git pull. To avoid this, we have a wonderful tool, i.e. rebasing.
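Incidentally, git pull can do the rebase for you via its --rebase flag. Here is a sketch in throwaway repositories, with a local directory standing in for the real upstream remote (all names invented for illustration):

```shell
set -e
base=$(mktemp -d)
# A local "upstream" repository with one commit
git init -q "$base/upstream"
cd "$base/upstream"
git config user.email you@example.com
git config user.name "You"
main=$(git symbolic-ref --short HEAD)   # master or main, depending on git version
echo a > a.txt; git add a.txt; git commit -qm "A"
# Clone it, branch off, and commit E on the feature branch
git clone -q "$base/upstream" "$base/work"
cd "$base/work"
git config user.email you@example.com
git config user.name "You"
git checkout -qb feature
echo e > e.txt; git add e.txt; git commit -qm "E"
# Meanwhile, upstream moves ahead with commit B
cd "$base/upstream"
echo b > b.txt; git add b.txt; git commit -qm "B"
# Pull with rebase: E is replayed on top of B, and no merge commit appears
cd "$base/work"
git pull --rebase -q origin "$main"
git log --oneline
```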

What does rebasing do?

Even taken literally, the name tells you its purpose: it re-bases the commits of the current branch on top of another branch’s. Did that confuse you? No problem, let’s take an example. Suppose you are working on a project which has the following commit history:

A-B-C-D (master)

And then you created a feature branch and did few commits on it, that would be like:

A-B-C-D (master)
       \
        E (feature)

However, while you were working on your feature branch, work on the master branch proceeded: there have been a few more commits on the master branch, and a few more on your feature branch. Now your commit history looks like this:

A-B-C-D-H-I-J-K-L(master)
       \
        E-F-G (feature)

But now you need those master commits in your feature branch to get up-to-date. So, what would you do? Indeed, we can rebase. But what if we did a simple merge instead? Let’s see.

So for merge you would enter the git command as:

git checkout feature
git merge master

And then your commit history would be like this:

A-B-C-D-H-I-J-K-L(master)
       \       /
        E-F-G-M (feature)

Since you merged the master branch into the feature branch, commit M got created. It is a merge commit: it contains the changes from the master branch and adds them to your feature branch.

One thing to notice here is that none of the previous commits of either branch are affected or changed.

Finally, what would have happened if we had done git rebase instead of git merge? Let’s see that too. For rebasing, you would enter the git commands as:

git checkout feature
git rebase master

Now is the time to learn how rebase works. It re-bases, i.e. changes the base of, your feature branch onto the HEAD of the master branch. What’s the base of your feature branch now? Yup! It’s commit D. After rebasing, it shifts to commit L. It looks as though you had implemented your work on top of what everybody else has already done. During rebasing, the following steps are taken:

  • It finds the common commit of your current branch (feature, in our case) and the base branch (master, in our case), which here is commit D.
  • It then collects all the commits between that common commit (D) and the HEAD commit (G) of the current branch, and sets those commits aside. In our case these are E, F and G.
  • It shifts the base from that common commit to the HEAD of the base branch (L in our case).
  • It replays each commit that was set aside on top of the new base, creating a new commit for each.

Now the base of your current branch, i.e. the feature branch, is the HEAD of the master branch. Note the last step: every commit of the current branch, from the base commit to the HEAD, gets a different commit hash value, because each one has been re-committed after the base changed. This is everything that happens during the rebasing process.

After rebasing your commit history would look like this:

A-B-C-D-H-I-J-K-L(master)
                 \
                  E'-F'-G' (feature)

Now your base has changed from commit D to commit L.

E’, F’ and G’ refer to the new commits created during the rebase. Each corresponds to the changes of commits E, F and G respectively.

While merging, we usually come across merge conflicts. A merge conflict arises when the same line of the same file has been changed in two different branches. Git doesn’t know which change it should keep while merging, so it is our responsibility to resolve the conflict first and then try merging again.

Since rebasing is essentially a kind of merging (the difference being that it first changes your base and then applies the changes of the current branch), there is again a chance of conflicts. Rebasing stops the moment it finds a conflict and asks you to resolve it. After resolving, you don’t have to start the rebase again, as it is already in progress; it only pauses until you resolve the conflict. Once you are done resolving and have staged the resolved files with git add, you can continue the rebase using the command:

git rebase --continue

And then it will apply the remaining commits. However, if at any moment you feel you don’t want to continue with the rebase, you can abort it using

git rebase --abort

You would then be back at the same state you were in before starting the rebase.
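The whole conflict dance can be tried out safely in a throwaway repository. In this sketch (file contents invented), both branches edit the same line, the rebase stops, and we resolve, stage and continue:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
main=$(git symbolic-ref --short HEAD)
echo "base line" > file.txt
git add file.txt; git commit -qm "Base"
# Both branches change the same line of the same file
git checkout -qb feature
echo "feature version" > file.txt
git commit -qam "Feature change"
git checkout -q "$main"
echo "master version" > file.txt
git commit -qam "Master change"
# Rebase feature onto master; this stops with a conflict
git checkout -q feature
if ! git rebase -q "$main"; then
    echo "merged version" > file.txt      # resolve the conflict by hand
    git add file.txt                      # stage the resolution...
    GIT_EDITOR=true git rebase --continue # ...and carry on
fi
git log --oneline
```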

When not to use rebase?

Even though rebasing has its advantages, it can be troublesome at times. Always remember not to rebase a public branch, that is, a branch other people might have forked. Rebasing changes your commit history, which can lead to confusion and conflicts. I’ll say it again: do not ever rebase a public branch.

However, rebasing private branches is totally fine and often recommended.

What’s ahead?

So, now you know rebasing. Well, it’s not done yet; there is a whole lot more you can do with it. We’ll cover that in another blog post.

References:

I have gone through many links and posts to learn rebasing, and I would suggest you go through them too to get more insight into it.

Goodbye for now. Hope it was helpful.

Till next time, be curious and keep learning!

by gutsytechster at August 28, 2018 05:13 PM

August 25, 2018

Mayank Singhal (storymode7)

Dvorak Experience

It’s going to be a month from the time I started learning Dvorak keyboard layout. I just wanted to find out on my own what made it different? What better way to find out than learning it?

Qwerty has been the layout I’ve used ever since I came across computers. But recently, when I came to know about some of Dvorak’s benefits, I was tempted to try it out.

Converting Qwerty to Dvorak

Since I use a keyboard with a Qwerty layout, the major hurdle was to turn it into Dvorak. I didn’t know how it would be done, but I knew that it would be possible for sure. I knew that it was possible to exchange keys (I recently swapped the Caps Lock key with the escape key. It made vi experience blazing fast!). So if need be, I was ready to write a script to map my qwerty keys to Dvorak one by one. But thankfully it didn’t come to this.
Recently one of my friends shifted to Fedora. He was trying out a live USB and couldn’t get the command he used to remap keyboard keys on Ubuntu to work. So while trying to help him out, I read the man pages of the few alternatives he wanted me to check, trying them out on my dual-boot install of Fedora.

One of them was setxkbmap. That exactly served my purpose.

All I had to do was run one command and voila, I had the Dvorak layout. But to be on the safer side, let’s first note our current layout so that we can switch back without issues.

By default, you would be using the us layout. To check you can type:

setxkbmap -query

In the short output, you can see the layout defined.

This is your key to home. If you lose it, you’d have fun typing out weird qwerty characters even to say qwerty 😛

Let’s get a short demo. First, we need some keys that don’t change with the keyboard layout, like the arrow keys. And since we are trying out commands at the terminal, they will serve us perfectly.
So go ahead and create your safe way home: the default-layout command sitting in your shell history will be it.
Type:

setxkbmap us

Don’t expect anything to happen, since you are reapplying the default layout. Now if we mess up, we can just press the UP arrow key and go back up and hit enter to get the cozy qwerty back.
If you feel like doing it, then type out:

setxkbmap us -variant dvorak

Now try typing qwerty at your terminal. When I did this first time I couldn’t help but laugh at how simple and weird it was that your keyboard has gone haywire.
NOTE: Don’t let your PC sleep. As this layout is set for your account now, so it would be applied to every window that requires input. Even your login screen!
But if you were too excited and ended up doing that anyway, just look up the Dvorak layout on any other device and use it to type the command back, one key at a time.

Though this sufficed my necessity, I knew that a luxury was possible too.

Switching layouts on the fly!

Now let us write a small bash script to toggle layout. It will change to Dvorak if existing layout is us and vice versa.

For checking the existing layout, we already have setxkbmap -query; we’ll be using that. I used wildcards to check for the substring dvorak in the output of setxkbmap -query.

#!/usr/bin/env bash
# Toggle between qwerty (us) and the dvorak variant
if [[ $(setxkbmap -query) = *"dvorak"* ]]
then
    setxkbmap us
else
    setxkbmap us -variant dvorak
fi

This is the script I wrote.

I saved it as dvorak_toggle.sh. Now after making it executable, I can do ./dvorak_toggle.sh

I was happy with this for a minute, but quickly I realized that more luxury is possible 😛 So, next what I did was to map this script to a shortcut.
In Fedora, it’s pretty simple. Go to settings -> keyboard and just create a new shortcut.

In the command field, add the path to the script. (Since there is a shebang already, we don’t need bash path/to/script as a command)
I mapped the script to Alt + Enter. And voila in a second now I can switch from one layout to another!

Also, I found it helpful to keep an image of Dvorak layout saved offline. So that I can refer to it using GUI if I forget which key is which.

An FAQ and some suggestions

You don’t need to unlearn qwerty! I can type Dvorak and qwerty both above 40wpm easily. In fact, if you’ve prior touch-typing experience, it’d only help you!

Is it really fast?

It is definitely fast. Within 2 weeks approx, I could type with decent accuracy and about 30 wpm speed (I know that at qwerty many folks would have been faster within that interval of time, but it took me around a month to get my fingers at place correctly on qwerty 🙂 I’m a slow learner. YMMV). And some words I could type faster than qwerty even though it had been only two weeks.

Does it change the position of only the alphabet keys?

Nopes. It changes punctuation marks and some symbols as well. And more used punctuation marks like , and . are placed on the top row in place of q and w of qwerty. Some symbols like / and + are also shifted to the top row in place of [ and ]. Similarly a few other changes.

Why was qwerty designed like this?

Early alphabetical typewriters had a problem: while typing fast, the rods attached to the keys would jam against each other. Qwerty dealt with this by spreading the keys out so that clashes were less frequent.
But that was in 1873! We no longer use typewriters with clashing rods; instead, those spread-out keys only strain our hands, and Dvorak tries to deal with this.

Would my speeds at qwerty become slow?

Short answer: yes (if you stop qwerty absolutely) and no (if you use it even once a week).

Switching between the layouts isn’t physically difficult. Like if you practiced for 15 min on Dvorak you can get your original qwerty speed back on qwerty in no time. It’s what I realized because I used to do 1-2 lessons of Dvorak in a day and then after finishing them I’d immediately test my performance on qwerty. On some days I even dropped to a 20 wpm on qwerty after practicing Dvorak for long. But, on some days I could type qwerty as fast as I used to.

According to me, once you reach a speed of 50+wpm, speed is more of a mental construct. It is just how much you can focus. How clear your mind can get. Like I said after Dvorak, on qwerty I’d get 20wpm to 60wpm. At times the 20 wpm was followed by the 60! The reason is that initially, I’d think before typing. But when I stopped thinking how to type and just let my fingers roll over the keyboard, they would end up making the words I’d intend them to. I think that Dvorak was on my mind whereas qwerty on my muscles.

But would qwerty practicing slow my Dvorak typing speeds?

Yes, if you don’t practice it daily; as it’s new for your muscles, it takes some time. When I started with Dvorak, my speed was so slow that I couldn’t keep it on the whole day. Also, since qwerty was my primary layout, most of the mistakes I made in gtypist were typing qwerty keys in the Dvorak layout (and a few days later, typing Dvorak keys in qwerty XD). Since I could type well in qwerty, I used to switch back to it usually. But once my speed was around 30 in gtypist, I started taking Dvorak out for a spin, even in typing races with some fellows!
All you need, for starters, is to get your muscles used to the layout.

If you’ve never touch typed and are learning Dvorak as your first keyboard you’d get it more easily. But your fingers might be a little hesitant for a few days.

Don’t type letters. Type words.

While practicing touch typing we often get in the habit of keeping a letter in our mind and then thinking where it is and finally pressing it without seeing it. Typing the letters is another thing that would make you slow despite knowing all the key positions. Instead, practice typing a word in such a manner that you don’t focus on individual letters anymore.

This is similar to stenographers’ use of the stenotype keyboard. They write a syllable, a word or even a phrase with a single stroke!

This is the reason behind the suggestion to practice one keyboard everywhere, i.e. not only in typing lessons but also in chats, while browsing, etc., so that you get into the habit of typing words rather than letters.

Keeping the keyboard in the correct position

Try to keep the keyboard positioned so that your wrists don’t bend while typing. Keep your wrists flat and your shoulders relaxed. I’ve noticed not only an increase in typing comfort but also an increase in typing speed when the keyboard position is right.

Shoulders and hands relaxed

When trying to type fast, I used to tense up my muscles and that would eventually slow me down. If you rest your fingers rather than keeping them tensed you’ll again notice an increase in comfort and typing speed.

Don’t pound on the keys

You might have already read this somewhere. I too have! But even to this day I unintentionally end up pounding the keys. The secret is to press the keys gently (not slowly) enough just to register the touch. This helps you type fast and also keeps your keyboard in good health as compared to playing Whac-a-mole with your keyboard.

But what if I switch to Dvorak and I need to use someone else’s qwerty keyboard?

Well well, this was one of the two reasons I was apprehensive about switching to Dvorak completely (the other being that I’d lose the speed I had at qwerty).

Then I realized that I don’t use somebody else’s keyboard that often.
Another major point against it is that, even if you can type 60+ on your own keyboard, there’s no surety that you would have the same speed on someone else’s keyboard.

Currently, I am a student, so the only other keyboard I have to interact with too often is the lab’s old qwerty. Those keyboards are generally so old, and their keys so uncomfortable, that I have to pound them most of the time. But keeping these extremes aside, even if I type on a friend’s brand new laptop, I can’t get the same speed that my keyboard gives me. Then again, this is what I experienced and your mileage may vary 🙂

So what layout should I learn?

I strongly feel that Dvorak is amazing and this should be the layout you should learn if you ever want to go at really high speeds or just want a more comfortable experience for your fingers.
But on the other hand, knowing a little qwerty helps.

Can qwerty be bad for my wrists and fingers?

Short answer: yes!

Here’s a short experience of mine 🙂

A few days back I would have said no, since at that time I used qwerty only in gtypist. But now I’ve used pure qwerty for some days. Today, to test how good my qwerty remained, I did a typing test. At first, I couldn’t get a good speed. I thought it would improve by the 2nd or 3rd trial, as it used to, but to my surprise it took my mind around 25-30 minutes to get qwerty back. But as soon as I could type 30, I could go 50 easily. After all, it was my mind that needed to come back; as soon as it did, the muscles kicked into action. And once again I retained my speed on both layouts.

BUT the thing that immediately bugged me about qwerty, when I tried my hands at it after some time, was how awkwardly the keys are laid out. I had never thought about it in my entire life! My wrists started to hurt slightly, and I realized it may be because in qwerty many frequently used keys sit in the bottom row, and some in the top row. That felt so uncomfortable that I knew I’d be staying with Dvorak now and doing qwerty only once a week.

Are shortcuts awkward with Dvorak?

Nah. On the contrary, I’ve come to find Dvorak great in that domain too! Considering the fact that it takes you away from vim and its shortcuts, it is only better. I’ve even switched my editor to emacs and I’ve been at it for a few weeks now! I can’t bear vim with Dvorak. Though it is amazing for qwerty, emacs is more natural for Dvorak in my opinion.
And for the most common shortcut, Ctrl-c & Ctrl-v, Dvorak is amazing. Both C and V are at comfortable positions of the right hand when you press the Ctrl with your left.

But like everything, you find the thing you are used to more comfortable than a new thing. So it might be some time before you find the layout that suits you. Though if you spend most of your time on a personal setup, then Dvorak deserves a try.

Touch type if you type. And go Dvorak if you touch type.

storymode7

–Touch typed in Dvorak!

by storymode7 at August 25, 2018 07:55 PM

August 21, 2018

Kumar Vipin Yadav (kvy)

STRUCTURE IN C

STRUCTURE:
You are aware that a variable stores a single value of a data type, and that arrays can store many values of a similar data type; the data in an array is of the same type throughout. In real life we need to handle different data types together. For example, to maintain employees' information we need fields such as name, age, qualification, salary etc. Here, dissimilar data types are required to maintain the information: name and qualification are of char type, age is an integer, and salary is a float. All these data types cannot be expressed in a single array. One might think of declaring a different array for each data type, but that would hugely increase the source code of the program. Hence, arrays are not useful here. For tackling such mixed data types, C provides a special feature known as the structure.
A structure is a collection of one or more variables, possibly of different data types, grouped together under a single name. Using structures we can group variables, arrays, pointers etc.

Features of structures:

1. To copy the elements of one array to another array of the same data type, the elements must be copied one by one; it is not possible to copy all of them at once. In a structure, however, it is possible to copy the contents of all the structure members, even of different data types, to another structure variable of the same type using the assignment (=) operator. This is possible because the structure members are stored in successive memory locations.

2. Nesting of structures is possible, i.e. one can create a structure within a structure. Using this feature one can handle complex data types.

3. It is also possible to pass structure elements to a function, similar to passing an ordinary variable. One can pass individual structure members or the entire structure, by value or by address.

4. It is also possible to create structure pointers, i.e. a pointer pointing to a structure. Its members are then accessed with the arrow (->) operator.

Creating a structure definition:

   syn:   struct Name_of_structure 
          {
             member1;
             member2;
             member3;

          };
   e.g.
       struct student                          struct employee                        struct test
          {  int roll;                              { int empno;                          {  int a,b,c;
             char name[100],fname[100];               char name[100],address[100];           float x,y,z;
             char address[100],inst[100];             char dept[100],post[100];              char p[9];
             char sub[100];                           float basic;                         };
             float fee;                             };                              
           };

Another way of creating structure variables (the second form uses a nameless structure):

      struct student                                    struct
    {                                                 {
      int roll;                                           int roll;
      char name[100];                                     char name[100];
      char sub[100];                                      char sub[100];
      float fee;                                          float fee;
    }A,B,C;                                           }A,B,C;

USING STRUCTURE VARIABLE :

#include <stdio.h>
#define z 50
struct employee
{
    int ID;
    char name[z];
    char dept[z];
    float sailry;
};
int main()
{
    struct employee A;
    char Escape_NULL;

    printf("Enter Employee ID : ");
    scanf("%d",&A.ID);
    scanf("%c",&Escape_NULL);

    printf("Enter Employee name : ");
    gets(A.name);

    printf("Enter sailry : ");
    scanf("%f",&A.sailry);
    scanf("%c",&Escape_NULL);

    printf("Enter Department of Employee : ");
    gets(A.dept);

    printf("Name : %s. \n",A.name);
    printf("ID : %d.\n",A.ID);
    printf("Department : %s.\n",A.dept);
    printf("Sailry : %.3f.",A.sailry);

    return 0;
}

Output:-

Enter Employee ID : 10001
Enter Employee name : Vipin
Enter sailry : 18000.69;
Enter Department of Employee : IT
Name : Vipin. 
ID : 10001.
Department : IT.
Sailry : 18000.690.

(.) Dot operator

This operator is referred to as the structure member operator. It is used to access a
structure member from a structure variable.
syn: structure_variable.member_name;
The typedef keyword can also be used to give a structure type a new, shorter name:
syn: typedef struct Name_of_structure new_name;
e.g.

1.                                          2.                                    3.
                                                                                  Nameless structure
                                                                                  Using typedef
  typedef struct student                    typedef struct employee               typedef struct
    {   int roll;                              { int empno;                          {  int a,b,c;
        char name[100],fname[100];               char name[100],address[100];           float x,y,z;
        char address[100],inst[100];             char dept[100],post[100];              char p[9];
        char sub[100];                           float basic;                         }test;
        float fee;                             }emp;                              
    };
    typedef struct student std;

Array of structures :

We can create an array of our structure. In this example we will take employee details
as input and print them.

#include <stdio.h>
#define z 50
struct employee
{
    int ID;
    char name[z];
    char dept[z];
    float sailry;
};
typedef struct employee emp;
int main()
{
    struct employee A[5];
    char Escape_NULL;
    int i;
    for ( i = 0 ; i < 5 ; i++ )
    {
        printf("Enter Employee ID : ");
        scanf("%d",&A[i].ID);
        scanf("%c",&Escape_NULL);

        printf("Enter Employee name : ");
        scanf("%s",A[i].name);

        printf("Enter sailry : ");
        scanf("%f",&A[i].sailry);
        scanf("%c",&Escape_NULL);

        printf("Enter Department of Employee : ");
        scanf("%s",A[i].dept);

    }

    for ( i = 0 ; i < 5 ; i++ )
    {
        printf("Name : %s.\nID : %d.\nSailry : %f.\nDepartment : %s.",A[i].name,A[i].ID,A[i].sailry,A[i].dept);
        printf("\n\n");
    }
    return 0;
}

Output:-

Enter Employee ID : 1001
Enter Employee name : Vipin
Enter sailry : 18200.69
Enter Department of Employee : IT
Enter Employee ID : 1002 
Enter Employee name : Yatender
Enter sailry : 18000
Enter Department of Employee : Sales
Enter Employee ID : 1003
Enter Employee name : Harchand
Enter sailry : 17000
Enter Department of Employee : Markcketing            
Enter Employee ID : 1004
Enter Employee name : XYZ
Enter sailry : 0000
Enter Department of Employee : abc
Enter Employee ID : 1005
Enter Employee name : ABC
Enter sailry : 0000
Enter Department of Employee : XYZ

Name : Vipin.
ID : 1001.
Sailry : 18200.689453.
Department : IT.

Name : Yatender.
ID : 1002.
Sailry : 18000.000000.
Department : Sales.

Name : Harchand.
ID : 1003.
Sailry : 17000.000000.
Department : Markcketing.

Name : XYZ.
ID : 1004.
Sailry : 0.000000.
Department : abc.

Name : ABC.
ID : 1005.
Sailry : 0.000000.
Department : XYZ.

In this example we will take students' marks as input and display the names of the students
who scored above 85% marks.

#include <stdio.h>
#define z 50
struct Student
{
    char name[z];
    float Math;
    float Physics;
    float Chemistry;
};
typedef struct Student std;
int main()
{
    std A[5];
    int i;
    float P;
    for ( i = 0 ; i < 5 ; i++ )
    {
        printf("Enter Student Name :");
        scanf("%s",A[i].name);

        printf("Enter marks of Math for %s :",A[i].name);
        scanf("%f",&A[i].Math);

        printf("Enter marks of Physics for %s :",A[i].name);
        scanf("%f",&A[i].Physics);

        printf("Enter marks of Chemistry for %s :",A[i].name);
        scanf("%f",&A[i].Chemistry);

        printf("\n");
    }

    for ( i = 0 ; i < 5 ; i++ )
    {
        P = ( A[i].Math + A[i].Physics + A[i].Chemistry ) / 3;
        if ( P > 85.00 )
            printf("%s Has %.2f%%.\n",A[i].name,P);
    }
    return 0;
}

Output:-

Enter Student Name :Vipin
Enter marks of Math for Vipin :85
Enter marks of Physics for Vipin :86
Enter marks of Chemistry for Vipin :87

Enter Student Name :Nitin
Enter marks of Math for Nitin :88
Enter marks of Physics for Nitin :45
Enter marks of Chemistry for Nitin :67

Enter Student Name :Harchand
Enter marks of Math for Harchand :43
Enter marks of Physics for Harchand :45
Enter marks of Chemistry for Harchand :67

Enter Student Name :Yatender
Enter marks of Math for Yatender :56
Enter marks of Physics for Yatender :43
Enter marks of Chemistry for Yatender :88

Enter Student Name :KVY
Enter marks of Math for KVY :90
Enter marks of Physics for KVY :98
Enter marks of Chemistry for KVY :99

Vipin Has 86.00%.
KVY Has 95.66%.

POINTER TO STRUCTURE :

We know that a pointer is a variable that holds the address of another variable of any data type,
such as int, float or double. In the same way we can also define a pointer to a structure.
Such a pointer holds the starting address of the structure variable, through which its members
can be accessed. These pointers are called structure pointers.

  struct    Name_of_structure    *ptrname;

e.g.

#include <stdio.h>
#define z 50
struct Student
{
    char name[z];
    float Math;
    float Physics;
    float Chemistry;
};
typedef struct Student std;
int main()
{
    std A = {"Vipin", 78, 87, 98};
    std *p;
    
    p = &A;

    printf("%s has %.1f marks in Math, %.1f marks in Physics and %.1f marks in Chemistry.",(*p).name,(*p).Math,(*p).Physics,(*p).Chemistry);
    // Use of parentheses in (*p).member is compulsory; you can use the -> (arrow) operator instead of (*)

    return 0;
}

Output:-

Vipin has 78.0 marks in Math, 87.0 marks in Physics and 98.0 marks in Chemistry.

ARROW OPERATOR :

This operator is used to access a structure member through a structure pointer variable.
It is also known as the structure pointer-to-member operator.

#include <stdio.h>
#define z 50
struct Student
{
    char name[z];
    int Math;
    int Physics;
    int Chemistry;
};
typedef struct Student std;
int main()
{
    std A = {"Vipin", 78, 87, 98};
    std *p;
    
    p = &A;

    printf("Name : %s\n",p->name);
    printf("Marks in Math : %d.\n",p->Math);
    printf("Marks in Physics : %d.\n",p->Physics);
    printf("Marks in Chemistry : %d.\n",p->Chemistry);

    return 0;
}

Output:-

Name : Vipin
Marks in Math : 78.
Marks in Physics : 87.
Marks in Chemistry : 98.

STRUCTURE AND FUNCTION :

Like variables of standard data types, structure variables can be passed to a
function by value or by address.
An example is given below.

#include <stdio.h>
#define z 50
struct complex
{
    int real,img;
};
typedef struct complex complex;
int main()
{
    complex A;
    
    void input( complex * );
    void output( complex );

    input(&A);
    output(A);

    return 0;
}
void input( complex *x )
{
    printf("Enter value of numerator : ");
    scanf("%d",&x->real);

    printf("Enter value of Denominator : ");
    scanf("%d",&x->img);
}

void output( complex x )
{
    printf("%d/%d",x.real,x.img);
}

Output:-

Enter value of numerator : 12
Enter value of Denominator : 45
12/45

In our next blog we will look at unions in the C language. 🙂

by kumar vipin yadav at August 21, 2018 07:29 PM

August 19, 2018

Mayank Singhal (storymode7)

Typing without looking at the keyboard?

Seeing someone typing while looking at the screen seems fascinating, no? It is like the person typing has learned the language that computers speak.

I used to be awestruck when I saw someone type like this. In wonder, I’d look at the person typing, his eyes focused on the screen, and then at his fingers, moving wildly in every direction forming words magically somehow. Later, I came to know that this magical skill is called touch typing.

Before touch typing, I used to type with 2-4 fingers that moved in a frenzy.  I had an idea of where each key was, but I could not type without seeing the keyboard. Still, I was satisfied. (That's what I used to think, since I didn't know what touch typing was.)

Around a year back I came across the term touch typing (almost the same time I was introduced to DGPLUG), and it reminded me of my fascination with the geeks who knew the art of typing without looking at the keyboard.

I started practicing touch typing using gtypist. Daily. Diligently.
Initially, it was nothing but a pain in the fingers.  And I was typing slower than I could while looking at the keyboard.

But with weeks came accuracy. With months, came speed.

A few months later, I could touch type well.  And even now, long after I started with touch typing, my love for it has only increased.  To this date, I love typing and I'm still fascinated by crazy moving fingers on the keyboard. Except this time, they are my own fingers!

During this time I also learned basic vim, which made qwerty my home.  My fingers never moved from the keyboard while writing or editing text. The hjkl keys were so comfortable that I wanted every application to have vim key-bindings. During this time I also tried spacemacs, which was again a beauty.

 

That’s qwerty touch typing. How about Dvorak?

Around a month back, during a session, mbuf (one of our mentors at DGPLUG) suggested that one should be able to type at least 70 wpm.  I was shocked.  I knew he used the Dvorak keyboard layout, but 70 wpm to me was more of an achievement than a lower limit. (Btw, he has since upgraded the limit; after a recent session, he said that 80 wpm should be the minimum 😛 ).

Around the same time, while we all were having a typing race, one mentor said: “Wait till you see mbuf”, then I remembered the remark about 70 wpm and almost spontaneously asked what his (mbuf’s) typing speed was.
The reply was: “Always more than 80”.
I sat gaping at that number for a while. I was decent at qwerty, but from then on Dvorak had my attention.

 

Crux of a small research

Then for a day or two, I read about Dvorak and Qwerty.

Early alphabetical typewriters had a problem: while typing fast, the rods attached to the keys would stick to each other. Qwerty dealt with this problem by spreading the keys out so that clashes were less frequent.
But this was in 1873!

Also, the vowels, which appear in almost all English words, are not in qwerty's home row (except 'a'),
whereas in Dvorak all the vowels are on the home row, right under the five fingers of your left hand!

TH: one of the most used key combinations is under two adjacent fingers of your right hand, on the home row.

Also, the keys that are used less are pushed to positions that are a little harder to reach
than the home row keys.  For example, v is placed below the right-hand ring finger.
The point here is that with Dvorak your fingers rest on the home row for longer, compared to the spread-out layout of qwerty.  You can type more effortlessly.

 

by storymode7 at August 19, 2018 10:38 AM

Ananya Maiti

Understanding python requests

In this post I am going to discuss the python-requests library. Python-requests is a powerful HTTP library that helps you make HTTP(s) requests very easily, with a minimal amount of code, and it also supports Basic HTTP Authentication out of the box. But before diving in, I want to describe my motivation for writing this post.

When it comes to writing software, libraries are a lifesaver. There is a library that addresses almost every problem you need to solve. That was the case for me as well. Whenever I used to face a specific problem I would look to see, if a library already existed. But I never tried to understand how they were implemented, the hard work that goes into building them, or the folks behind the libraries. Most of the libraries we use these days are open source and their source code is available somewhere. So we could, if we wished to, with a little hard work, understand the implementation.

During a related discussion with mbuf in the #dgplug channel, he gave me an assignment: pick one of the libraries I had recently used and understand what data structures/algorithms it uses. So I chose to look inside the source code of python-requests. Let's begin by understanding how two nodes in a network actually communicate.

Socket Programming : The basis of all Networking Applications

Socket Programming is a way of connecting two nodes in a network and letting them communicate with each other. Usually, one node acts as a server and the other as a client. The server node listens on a port at an IP address, while the client reaches out to make a connection. The combination of a port and an IP address is called a socket. The listener socket on the server listens for requests from the client.

This is the basis of all web browsing that happens on the Internet. Let us see what a basic client-server socket program looks like:
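A minimal, self-contained sketch of such a client-server pair using Python's standard socket module (running the server in a thread, the port 12345 and the message text are illustrative choices, not the original post's exact code):

```python
import socket
import threading

ready = threading.Event()

def server():
    # The server binds to all interfaces ("" means 0.0.0.0) on port 12345
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", 12345))
    srv.listen(1)            # listen for an incoming connection
    ready.set()
    conn, addr = srv.accept()
    conn.sendall(b"Thank you for connecting")  # send a response
    conn.close()             # close the connection
    srv.close()

t = threading.Thread(target=server)
t.start()
ready.wait()                 # make sure the server is listening first

# The client connects to 127.0.0.1 (localhost) on the same port
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", 12345))
msg = cli.recv(1024)         # receive up to 1024 bytes
cli.close()
t.join()
print(msg.decode())
```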

As you can see a server binds to a port where it listens to any incoming request. In our case it is listening to all network interfaces 0.0.0.0 (which is represented by an empty string) at a random port 12345. For a HTTP Server the default port is 80. The server accepts any incoming request from a client and then sends a response and closes the connection.

When a client wants to connect to a server it connects to the port the server is listening on, and sends in the request. In this case we send the request to 127.0.0.1 which is the IP of the local computer known as localhost.

This is how any client-server communication looks. But there is obviously a lot more to it. There will be more than one request coming to a server, so we will need a multi-threaded server to handle them. In this case I sent simple text, but there could be different types of data like images, files etc.

Most of the communication that happens over the web uses HTTP which is a protocol to handle exchange and transfer of hypertext i.e. the output of the web pages we visit. Then there is HTTPS which is the secure version of HTTP which encrypts the communication happening over the network using protocols like TLS.

Making HTTP Requests in Python

Handling HTTP/HTTPS requests in an application can be complex, and so we have libraries in every programming language that make our lives easier. In Python there are quite a few libraries that can be used for working with HTTP. The most basic is http.client, which is a cpython standard library module. http.client itself uses sockets to make the request. Here's how we make an HTTP request using http.client:
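A sketch of such a request with http.client; the throwaway local server and the made-up user:secret credentials are only there to keep the example self-contained:

```python
import base64
import http.client
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence request logging
        pass

# A throwaway local server; port 0 asks the OS for any free port
httpd = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", httpd.server_port)
conn.request("GET", "/")
resp = conn.getresponse()
data = resp.read()
print(resp.status, data)

# For Basic HTTP Authentication the credentials go into an
# Authorization header as a Base64-encoded "user:password" string
token = base64.b64encode(b"user:secret").decode("ascii")
conn.request("GET", "/", headers={"Authorization": "Basic " + token})
resp2 = conn.getresponse()
resp2.read()
print(resp2.status)

conn.close()
httpd.shutdown()
```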

For making Requests that involve Authentication we have to use Authorization headers in the request header. We have used the base64 library here for generating a Base64 encoded Authorization String.

Using python-requests for making HTTP requests

The http.client library is a very basic library for making HTTP requests, and it's not usually used directly for making complex HTTP requests. Requests is a library that wraps around http.client and gives us a really friendly interface to handle all kinds of http(s) requests, simple or complex, and takes care of lots of other nitty-gritty details, e.g., TLS security for HTTPS requests.

Requests heavily depends on the urllib3 library, which in turn uses the http.client library. This sample shows how requests is used for making HTTP requests:
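A comparable sketch with requests (a third-party package, installable with pip install requests); the local test server is again just for self-containment:

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

import requests  # third-party: pip install requests

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"hello"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):   # silence request logging
        pass

httpd = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=httpd.serve_forever, daemon=True).start()

# requests parses the URL and picks the right protocol (http/https) for us
url = "http://127.0.0.1:%d/" % httpd.server_port
r = requests.get(url)
print(r.status_code, r.text)

# Basic auth is one keyword argument instead of a hand-built header
r2 = requests.get(url, auth=("user", "secret"))
print(r2.status_code)

httpd.shutdown()
```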

 

You can see that making requests is much simpler using the requests module. It also gracefully decides which protocol to use by parsing the URL of the request.

Let us now go over the implementation

Inspecting requests

The requests api contains method names similar to the type of request, so there are get, post, put, patch, delete, and head methods.

Given below is a rough UML class diagram of the most important classes of the requests library

When we make a request using the request api the following things happen

1. Call to Session.request() method

Whenever we make a request using the requests api it calls a requests.request() method, which in turn calls the Session.request() method after creating a new Session object. The request() method then creates a Request object and prepares to make the request.

2. Create a PreparedRequest object

The request() method creates a PreparedRequest object using the Request object and prepares it for request

3. Prepare for the Request

The PreparedRequest object then makes a call to the prepare() method to prepare for the request. The prepare() method makes a call to the prepare_method(), prepare_url(), prepare_headers(), prepare_cookies(), prepare_body(), prepare_auth(), and prepare_hooks() methods. These methods do some pre-processing on the various request parameters.

4. Send the Request

The Session object then calls the send() method to send the request. The send() method then gets the HTTPAdapter object which makes the request

5. Get the Response

The HTTPAdapter makes a call to its send() method, which gets a connection object using get_connection() and then sends the request. It then builds the Response object from the request object and the httplib response (httplib is the Python 2 name of http.client).
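Steps 1-3 above can be observed directly from the interpreter, without sending anything over the network (the URL and the X-Demo header below are made up for illustration):

```python
import requests  # third-party: pip install requests

# Steps 1-2: a Request object holds the raw parameters of the call
req = requests.Request("GET", "http://example.org/", headers={"X-Demo": "1"})

# Step 3: prepare() runs prepare_method(), prepare_url(), prepare_headers(), ...
prepared = req.prepare()

print(prepared.method)             # the prepared HTTP verb
print(prepared.url)                # the prepared URL
print(prepared.headers["X-Demo"])  # headers survive preparation

# Step 4 would be requests.Session().send(prepared),
# which hands the PreparedRequest off to an HTTPAdapter.
```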

And from here onwards: how does a request actually get sent, and how do we get an httplib response?

Enter the urllib3 module

The urllib3 module is used internally by requests to send the HTTP request. When the control comes to the HTTPAdapter.send() method the following things happen

1. Get the Connection object

The HTTPAdapter gets the connection object using the get_connection() method. It returns a urllib3.ConnectionPool object. The ConnectionPool object actually makes the request.

2. Check if the request is chunked and make the request

The request is checked to see if it’s chunked or not. If it is not chunked a call to urlopen() method of ConnectionPool object is made. The urlopen() method makes the lowest level call to make the request using the httplib(http.client in python3) library. So it takes in a lot of arguments from the PreparedRequest object.

If the request is chunked a new connection object is created, this time, the HTTPConnection object of httplib. The connection object will be used to send the request body in chunks using the HTTPConnection.send() method which uses socket program to send the request.

3. Get the httplib response

The httplib response is generated using the urlopen() method if the request is not chunked, and if the request is chunked it is generated using the getresponse() method of httplib. httplib then uses sockets to get the response.

And there you have it! The most important parts of the requests workflow. There is a lot more that you can know by reading the code further.

Libraries make the life of a developer simpler by solving a specific problem and making the code shareable and widespread. There's also a lot of hard work involved in maintaining a library. So if you are a regular user of a library, do consider reading the source code if it's available, and contributing to it if possible.

Thanks to kennethreitz and the requests community for making our life easier with requests!

References

  1. https://www.geeksforgeeks.org/socket-programming-python/
  2. https://docs.python.org/2/howto/sockets.html
  3. https://en.wikipedia.org/wiki/HTTPS
  4. https://docs.python.org/3/library/http.client.html
  5. https://github.com/requests/requests
  6. https://github.com/urllib3/urllib3
  7. https://tutorialspoint.com/uml/uml_class_diagram.htm

Also many thanks to #dgplug friends for helping me improve this post.

by ananyomaiti at August 19, 2018 07:23 AM

Kumar Vipin Yadav (kvy)

2-D Array and 2-D strings in C

2D Array and 2D strings

A 1-D array is a collection of several elements arranged in a single row.
A 2-D array is a collection of rows and columns, where each row contains the same number of columns.
2-D arrays remove the need for several 1-D arrays, and they are helpful for creating the
mathematical data structure known as a matrix.

Initializing 2-D Array

   1.  int a[3][2] = { {1,2}, {3,4}, {5,6} };   /* row by row */
   2.  int b[3][2] = { 1, 2, 3, 4, 5, 6 };      /* values filled row-wise */

Taking input in a 2-D Array :-

#include <stdio.h>
int main()
{
    int a[3][2];
    int i,j;

    for( i = 0 ; i < 3 ; i++ )
    {
        for( j = 0 ; j < 2 ; j++ )
        {
            printf("Enter [%d][%d] element of array : ",i,j);
            scanf("%d",&a[i][j]);
        }
    }
}

Output:-

Enter [0][0] element of array : 1
Enter [0][1] element of array : 2
Enter [1][0] element of array : 3
Enter [1][1] element of array : 4
Enter [2][0] element of array : 5
Enter [2][1] element of array : 6

Taking input and printing a 2-D Array :-

#include <stdio.h>
int main()
{
    int a[3][2];
    int i,j;

    for( i = 0 ; i < 3 ; i++ )
    {
        for( j = 0 ; j < 2 ; j++ )
        {
            printf("Enter [%d][%d] element of array : ",i,j);
            scanf("%d",&a[i][j]);
        }
    }

    for( i = 0 ; i < 3 ; i++ )
    {
        for( j = 0 ; j < 2 ; j++ )
        {
            printf("%4d",a[i][j]);
        }
        printf("\n");
    }
}

Output :-

Enter [0][0] element of array : 1
Enter [0][1] element of array : 2
Enter [1][0] element of array : 3
Enter [1][1] element of array : 4
Enter [2][0] element of array : 5
Enter [2][1] element of array : 6
   1   2
   3   4
   5   6

Adding, Subtraction and multiplication of matrix :-

#include <stdio.h>
int main()
{
    int a[3][3];
    int b[3][3];
    int c[3][3];

    void input( int [][3] );

    void output( int [][3] );

    void sum( int [][3] , int [][3] , int [][3] );

    void subtraction( int [][3] , int [][3] ,int [][3] );

    void multiply( int [][3] , int [][3], int [][3] );

    printf("Taking input in first Array : \n");
    input(a);

    printf("Value in a : \n");
    output(a);

    printf("Taking input in first Array : \n");
    input(b);

    printf("\nValue in b : \n");
    output(b);

    printf("Adding a and b :\n");
    printf("\nvalue of sum of a and b:\n");
    sum(a,b,c);
    output(c);

    printf("Subtracting a and b :\n");
    printf("\nvalue of subtraction of a and b:\n");
    subtraction(a,b,c);
    output(c);

    printf("multiplying a and b :\n");
    printf("value of multiplication of a and b :\n");
    multiply(a,b,c);
    output(c);
}

void input( int a[][3] )
{
    int i ,j;

    for( i = 0 ; i < 3 ; i++ )
    {
        for( j = 0 ; j < 3 ; j++ )
        {
            printf("Enter [%d][%d] element of array : ",i,j);
            scanf("%d",&a[i][j]);
        }
    }
}

void output( int a[][3] )
{
    int i,j;
    for( i = 0 ; i < 3 ; i++ )
    {
        for( j = 0 ; j < 3 ; j++ )
        {
            printf("%4d",a[i][j]);
        }
        printf("\n");
    }
}

void sum( int x[][3] , int y[][3], int z[][3] )
{
    int i,j;

    for( i = 0 ; i < 3 ; i++ )
    {
        for( j = 0 ; j < 3 ; j++ )
        {
            z[i][j] =  x[i][j] + y[i][j];       
        }
    }
}

void subtraction( int x[][3] , int y[][3], int z[][3] )
{
    int i,j;

    for( i = 0 ; i < 3 ; i++ )
    {
        for( j = 0 ; j < 3 ; j++ )
        {
            z[i][j] =  x[i][j] - y[i][j];       
        }
    }
}

void multiply( int x[][3] , int y[][3], int z[][3] )
{
    int i,j,k;

    for( i = 0 ; i < 3 ; i++ )
    {
        for( j = 0 ; j < 3 ; j++ )
        {
            z[i][j] = 0 ;
            for( k = 0 ; k < 3 ; k++ )
                z[i][j] = z[i][j] + ( x[i][k] * y[k][j] );
        }
    }
}

Output:-

Taking input in first Array : 
Enter [0][0] element of array : 1
Enter [0][1] element of array : 2
Enter [0][2] element of array : 3
Enter [1][0] element of array : 4
Enter [1][1] element of array : 5
Enter [1][2] element of array : 6
Enter [2][0] element of array : 7
Enter [2][1] element of array : 8
Enter [2][2] element of array : 9
Value in a : 
   1   2   3
   4   5   6
   7   8   9
Taking input in first Array : 
Enter [0][0] element of array : 1
Enter [0][1] element of array : 2
Enter [0][2] element of array : 3
Enter [1][0] element of array : 4
Enter [1][1] element of array : 5
Enter [1][2] element of array : 6
Enter [2][0] element of array : 7
Enter [2][1] element of array : 8
Enter [2][2] element of array : 9

Value in b : 
   1   2   3
   4   5   6
   7   8   9
Adding a and b :

value of sum of a and b:
   2   4   6
   8  10  12
  14  16  18
Subtracting a and b :

value of subtraction of a and b:
   0   0   0
   0   0   0
   0   0   0
multiplying a and b :
value of multiplication of a and b :
  30  36  42
  66  81  96
 102 126 150

2-D String :

As we know, a 1-D string can hold only one string at a time. To store more than one string,
we can create a 2-D string (an array of strings).

Initializing of 2-D string :

   char a[5][10] = { "vipin", "nitin", "bhaskar", "yatender", "harchand" };

Input with 2-D String :

In this program we will take input in 2-D String.

#include <stdio.h>
int main()
{
    char a[5][10];
    int i;

    for( i = 0 ; i < 5 ; i++ )
    {
        printf("Enter a string : ");
        scanf("%s",a[i]);
    }
    return 0;
}

output:

Enter a string : vipin
Enter a string : nitin
Enter a string : bhaskar
Enter a string : yatender
Enter a string : harchand

Output with 2-D String :

In this program we will print a 2-D string after taking input.

#include <stdio.h>
int main()
{
    char a[5][10];
    int i;

    for( i = 0 ; i < 5 ; i++ )
    {
        printf("Enter a string : ");
        scanf("%s",a[i]);
    }

    for( i = 0 ; i < 5 ; i++ )
    {
        puts(a[i]);
    }
    return 0;
}
Enter a string : Vipin
Enter a string : Nitin     
Enter a string : Bhaskar
Enter a string : Yatender
Enter a string : Harchand
Vipin
Nitin
Bhaskar
Yatender
Harchand

Now we will code a program that prints all names starting with v or V
from a 2-D string:

#include <stdio.h>
int main()
{
    char a[10][10];
    int i;

    for( i = 0 ; i < 10 ; i++ )
    {
        printf("Enter a string : ");
        scanf("%s",a[i]);
    }

    for( i = 0 ; i < 10 ; i++ )
    {
        if ( a[i][0] == 'v' || a[i][0] == 'V')
            puts(a[i]);
    }
    return 0;
}

Output:-

Enter a string : vipin
Enter a string : nitin
Enter a string : bhaskar
Enter a string : harchand
Enter a string : yatender
Enter a string : vijay 
Enter a string : ajay
Enter a string : ram
Enter a string : Vinod
Enter a string : prateek
vipin
vijay
Vinod

This code will find a given string from a bunch of strings:

#include <stdio.h>
#include <string.h>
int main()
{
    char a[10][10];
    char b[10];
    int i;

    for( i = 0 ; i < 10 ; i++ )
    {
        printf("Enter a string : ");
        scanf("%s",a[i]);
    }

    printf("Enter a string you want to search : "); 
    scanf("%s",b);

    for( i = 0 ; i < 10 ; i++ )
    {
        if ( strcmp( a[i] , b ) == 0)
            printf("String is at a[%d]",i);
    }
    return 0;
}

Output:-

Enter a string : Vipin
Enter a string : nitin
Enter a string : yatender
Enter a string : bhaskar
Enter a string : ajay
Enter a string : vijay
Enter a string : harchand
Enter a string : ned           
Enter a string : caption
Enter a string : rajat
Enter a string you want to search : vijay
String is at a[5]

We can create arrays of even more dimensions, and we use them according to our needs.
In our next blog post we will read about structures 🙂

by kumar vipin yadav at August 19, 2018 06:54 AM

Siddhant N Trivedi (sidntrivedi)

Ideathon at IIIT-D

Hey everyone! I, along with 3 other members, participated in a Hackathon, or basically an Ideathon, that took place at Indraprastha Institute of Information Technology (IIIT), Delhi on 17th and 18th August, 2018.

It was a 24-hour hackathon in which we had to propose an idea and then get its business model, revenue model and market research ready. Then, we had to approach several mentors with our idea, and they would provide us with funding if they found the idea innovative and unique.

Since the event was sponsored by IBM, an IBM official told us about IBM's CallforCode.org campaign, through which it has made various ML, AI, IoT and Watson APIs worth approx. 60 lakhs open for us for free. All we have to do is use them through drag and drop and make REST API calls to them for any functionality.

Thus, we brainstormed for nearly 12 hours continuously and in the end arrived at a product which got us 7 lakhs of funding (not real money). The winner, though, got 102 lakhs of funding, which stunned us. Their idea must have been a really horrible one.

But, to conclude, it was an awesome experience.

Thanks IIITD for such a well organized and coordinated event.

by sidntrivedi012 at August 19, 2018 02:42 AM

August 18, 2018

Akshay Gaikwad (akshayg96)

Type hinting and Mypy

Python is a dynamically typed language, but static typing is possible if we use type annotations. Python 3.5 and later versions have type hinting functionality, though it is completely optional. Type hinting is useful for testing purposes in CI. It does not change the code to static typing; but...

August 18, 2018 07:00 AM

Rahul Jha (RJ722)

Do we really need to cover coverage with Vulture?

coverage - wow, so accurate - we need it…?

When this phase kicked in, I was still wrapping my head around coverage. My plan was to get coverage integrated with Vulture, which would allow users to “transfer” the results from coverage to Vulture so that the false positives were automatically detected and thereby suppressed. It sounded neat and, more so, doable (using an intermediate xml file), and so naturally, I quickly got down to nuts and bolts and started a Pull Request. But amid those splendid colors of awesome functionality, Jendrik came in and explained why we shouldn’t do it. I could extrapolate the following reasons:

  • We already created an easier, dynamic and robust way to create and manage whitelists (--make-whitelist) which shall eliminate the need of having 10 different things for dealing with false positives.
  • Coverage is a tool for dynamic analysis (which requires your code to actually be run) and is therefore slow, but gives much more accurate results. And if we already have results from coverage, why would we need Vulture at all?
  • Vulture is supposed to be a static analysis tool.
  • Vulture would no longer have been independent of external modules.

But it still struck me as a little odd at the time, because I thought that the functionality was optional and if someone didn’t want it, he would simply not use it - simple. By now, maybe you’ve judged that this was the “feature syndrome” talking (the more features we have, the more usable we are) and yes, you’re right. Luckily, Jendrik foresaw this early and redirected me towards http://neugierig.org/software/blog/2018/07/options which explains why it’s actually toxic for anything to have more “options” and how it was an expensive process in terms of time spent writing, documenting and maintaining them.

I’m very thankful to Jendrik and proud of the fact that we’ve still managed to keep the workflow involved in using Vulture as simple as it could get. :-)

August 18, 2018 12:00 AM

August 15, 2018

Anu Kumari Gupta (ann)

split() v/s rsplit() & partition() v/s rpartition()

split(), rsplit(), partition() and rpartition() are functions on strings in Python. Sometimes there is confusion between them. If you feel the same, then I assure you it will no longer be confusing to you.

Understanding split()

So, what does split() do? As the name suggests, split() splits the given string into parts. split() takes up to two arguments – one is the delimiter string (i.e., the token which you wish to use for separating the string into words). The other is the maxsplit value, that is, the maximum number of splits you wish to have. By default, split() uses whitespace as the delimiter. The result is the list of split words.

Here is how you use it:

By passing no arguments,

>>> s = "Hello people, How are you?"
>>> s.split()
['Hello', 'people,', 'How', 'are', 'you?']

By passing argument with just the delimiter,

>>> s = "Hello people, How are you?"
>>> s.split(",")
['Hello people', ' How are you?']

By passing argument with the delimiter and the maxsplit (say 1, which means to allow only one split),

>>> s = "Hello people, How are you?"
>>> s.split('H', 1)
['', 'ello people, How are you?']

If you pass a maxsplit value larger than the maximum number of splits possible, split() simply returns the list of all the separated words.

Understanding rsplit()

You might have a question – when split() splits the string, why do we need rsplit() at all, and what is it? The answer is that rsplit() does nothing more than splitting a given string, except that it starts splitting from the right side. It parses the string from the right side.

Here is how you use it:

By passing no arguments,

>>> s = "Hello people, How are you?"
>>> s.rsplit()
['Hello', 'people,', 'How', 'are', 'you?']

By passing argument with just the delimiter,

>>> s = "Hello people, How are you?"
>>> s.rsplit(",")
['Hello people', ' How are you?']

Note – the output remains the same as split() when we pass no arguments or just the delimiter.

However, if we pass the arguments with maxsplit as below, you will see the difference:

>>> s = "Hello people, How are you?"
>>> s.rsplit('H', 1)
['Hello people, ', 'ow are you?']

Observe, now the split took place on the right occurrence of delimiter.

Understanding partition()

Understood split() and rsplit(). But what is partition()? The answer is – partition() just splits the string into two parts, given the delimiter. It splits exactly into two parts (the left part and the right part of the specified delimiter). The output is a tuple of the left part, the delimiter, and the right part.

Here is how you use it:

>>> s = "I love Python because it is fun"
>>> s.partition("love")
('I ', 'love', ' Python because it is fun')

Note: There is no default argument. You must pass the delimiter, otherwise it raises a TypeError.

Understanding rpartition()

By now, the working of rpartition() should be intuitive. rpartition(), just like rsplit(), does the partition from the right side. It parses the string from the right side and, when the delimiter is found, it partitions the string into two parts and gives back a tuple, as is the case with partition().

Here is how you use it:

>>> s = "Imagining a sentence is so difficult, isn't it?"
>>> s.rpartition("is")
('Imagining a sentence is so difficult, ', 'is', "n't it?")

Notice the last occurrence of “is” in the above given string.

 

 

Hope this helps in understanding the working of these functions!

Happy Coding.

by anuGupta at August 15, 2018 07:37 PM

Kumar Vipin Yadav (kvy)

Pointers In C

Pointers in C

1. A pointer is a derived data type.

2. A pointer variable stores the address of another variable (or NULL). Because a pointer stores the address
of another variable, we can access and process that variable with the help of the pointer.

3. The run-time memory allocation scheme can be applied only with pointers in C.

4. In C we cannot access a variable of one function inside another function, but this
is possible with the help of pointers.

5. To create a pointer we use * ( the re-direction operator )

syntax ::

DATATYPE     *ptr_name;

e.g.

   int           *p;
   char          *q;
   float         *r;

The data type of a pointer specifies that the pointer can store the address of a variable of that particular data type.

using pointer with different datatype :

int A = 12;        char B = 'X';       float C = 3.14;       double D = 21.22;
int *P;            char *Q;            float *R;             double *S;
P = &A;            Q = &B;             R = &C;               S = &D;

—> & operator is known as “address of” operator.

size of pointer :
Every pointer, whatever data type it points to, has the same size, because a pointer stores a memory address,
and a memory address is just an integer. The actual size depends on the platform: old 16-bit compilers
(e.g. Turbo C) use 2-byte pointers, while a modern 64-bit system typically uses 8-byte pointers.

int A;             char B;      float C;       double D ;
2 or 4 byte        1 byte       4 byte         8 byte

int *P;            char *Q;     float *R;      double *S;
all four pointers have the same size (e.g. 2 bytes on a 16-bit compiler, 8 bytes on a 64-bit system)

Hopefully the basics of pointers make sense now.

Let’s solve some examples 🙂 :-

Now we will learn how to use pointers with simple arithmetic, strings, arrays and functions.

Pointers with simple arithmetic :-

1. A simple multiply program :-

#include<stdio.h>
int main()
{
    int A;
    int B;
    int C;
    int *p;
    int *q;
    int *r;

    A = 10;
    B = 20;

    p = &A;
    q = &B;
    r = &C;

    *r  = *p * *q ;

    printf("Multiply of A and B is %d.",*r);

    return 0;
}

Output:-

Multiply of A and B is 200.

2. A program to calculate Simple Interest :-

This program will ask you the Principle, Rate and Time and give you the simple interest,
but we use pointers here.

#include<stdio.h>
int main()
{
    float P ;
    float R ;
    float T ;
    float S;

    float *p;
    float *q;
    float *r;
    float *SI;

    p =  &P;
    q =  &R;
    r =  &T;
    SI = &S;

    printf("Enter Principle : ");
    scanf("%f",p);   // Because p has the address of P,
                     // we can write p instead of &P.

    printf("Enter Rate : ");
    scanf("%f",q);

    printf("Enter Time : ");
    scanf("%f",r);

    *SI = (*p * *q * *r)/100;   // or *SI = (*p**q**r)/100;

    printf("Simple Interest is %.2f%%.",*SI);

    return 0;
}

Output:-

Enter Principle : 10000
Enter Rate : 2.4
Enter Time : 1.5
Simple Interest is 360.00%.

Pointers with functions :-

Now we will use pointer with functions.

3. A simple sum program using pointer and function :-

Here we will create a function that stores the sum of
x and y in sum.

#include<stdio.h>
int main()
{
    int x = 90;
    int y = 67;
    int sum;

    void sum_of_2_number( int * , int * , int * );
    // In above statement we are telling that we give Address of variable as argument,
    // And receive them in pointers.

    sum_of_2_number( &x, &y, &sum );
    // Giving Address of variables.

    printf("Sum of x and y is %d.",sum);

    return 0;
}

void sum_of_2_number( int *a , int *b , int *c )
// Here we are reserving addresses in a, b and c.
{
    *c = *a + *b;
}

Output:-

Sum of x and y is 157.

4. A function that can swap value of 2 integer variables :-

Here in this problem in which we will have to swap values of 2 integers but,
with help of functions.
This can only be done with help of pointers only.

#include<stdio.h>
int main()
{
    int x = 90;
    int y = 67;

    void swap( int * , int * );

    printf("Values of x and y before calling function : \nx = %d y = %d \n",x,y);

    swap( &x, &y );

    printf("Values of x and y after calling function : \nx = %d y = %d \n",x,y);

    return 0;
}

void swap( int *a , int *b )
{
    int temp;

    temp = *a;
    *a = *b;
    *b = temp;
}
Values of x and y before calling function :
x = 90 y = 67
Values of x and y after calling function :
x = 67 y = 90

5. A function to calculate Area of a circle :-

Here we write a function that calculates the area of a circle using pointers.

#include<stdio.h>
int main()
{
    float r;
    float area;

    void Area_of_circle( float * , float * );

    printf("Enter Radius of Circle : ");
    scanf("%f",&r);

    Area_of_circle( &r, &area );

    printf("Area of circle is %.3f.",area);

    return 0;
}

void Area_of_circle( float *a , float *b )
{
    *b = 3.14* *a * *a;
}
Enter Radius of Circle : 9
Area of circle is 254.340.

6. A function to count number of digit in integer :-

This function will take an integer and count how many digits it has.

#include<stdio.h>
int main()
{
    int n, count = 0;

    void Count_digit( int * , int * );

    printf("Enter an integer : ");
    scanf("%d",&n);

    Count_digit( &n , &count );

    printf("Number of digits in the integer is %d.",count);

    return 0;
}

void Count_digit( int *a , int *b )
{
    for( ; *a != 0 ; *a = *a/10 )
        *b = *b + 1;
}

Output:-

Enter an integer : 999999
Number of digits in the integer is 6.

POINTERS WITH ARRAYS :-

When we create an array, the array name acts as a pointer that contains the address
of the first element. This pointer is constant in nature, meaning we cannot
change its value.


Arithmetic with pointer :-

1. Adding an integer to a pointer –>

We can add only an integer to a pointer, which returns an address.


2. Subtracting an integer from a pointer –>

We can likewise subtract only an integer from a pointer, which returns an address.

MULTIPLICATION, DIVISION AND REMAINDER OPERATIONS ARE NOT ALLOWED ON POINTERS.

3. Subtraction of 2 pointers –>

We cannot add 2 pointers to each other, but we can subtract 2 pointers, which returns the number of elements between the two addresses.


4. We can compare 2 pointers using relational operators –>


Let’s use pointers to print arrays :-

1. Here we print an array using a pointer :

#include<stdio.h>
int main()
{
    int a[5] = {23,54,56,67,78};

    int *p , i;

    p = a; // or p = &a[0];

    for ( i = 0 ; i < 5 ; i++ )
        printf("%d\n",*(p+i));

    return 0;
}

//or

int main()
{
    int a[5] = {23,54,56,67,78};

    int i;

    for ( i = 0 ; i < 5 ; i++ )
        printf("%d\n",*(a+i));

    return 0;
}
// or

int main()
{
    int a[5] = {23,54,56,67,78};

    int *p;

    for ( p = a ; p < a+5 ; p++ )
        printf("%d\n",*p);

    return 0;
}

Output:-

23
54
56
67
78

2. Here we print an array in reverse using a pointer :

#include<stdio.h>
int main()
{
    int a[5] = {23,54,56,67,78};

    int *p;

    for ( p = a+4 ; p >= a ; p-- )
        printf("%d\n",*p);

    return 0;
}
//or
int main()
{
    int a[5] = {23,54,56,67,78};

    int *p , i;

    for ( i = 4, p = a ; i >= 0 ; i-- )
        printf("%d\n",*(p+i));

    return 0;
}
// or

int main()
{
    int a[5] = {23,54,56,67,78};

    int i;

    for ( i = 4 ; i >= 0 ; i-- )
        printf("%d\n",*(a+i));

    return 0;
}

Output:-

78
67
56
54
23

Pointers with Strings :

1. Printing a string with the help of pointers

#include<stdio.h>
int main()
{
    char a[] = "My name is vipin.";
    char *p;

    for ( p = a ; *p != 0 ; p++ )
        printf("%c",*p);

    return 0;
}

Output:-

My name is vipin.

 

by kumar vipin yadav at August 15, 2018 05:55 PM

August 12, 2018

Ratan Kulshreshtha

Bootstrap Your .gitignore

Many of us use git to version control our projects, and despite the benefits git provides, we can all agree on one thing: Git is hard – screwing up is easy, and figuring out how to fix your mistakes is really hard. While working with git it is also important to tell git which files it should not remember, and thus not version control; this is where .gitignore comes into the picture.

August 12, 2018 06:50 AM

August 10, 2018

Siddharth Sahoo

How to Learn to Code ?

In this blog we are going to talk about what the course of action should be to learn coding right from scratch. First of all, if you feel you lack programming skill, don’t feel too bad, and try not to think that you are too late to learn how to code. So, to begin with, our very first step to learn programming is to choose a language you want to code in. The first thing you have to do is find your area of interest – that is, know which kind of development you enjoy the most. It can be web development, application development, data science, etc. Then you have to pick a domain language: choose the languages you enjoy the most, which you can discover by trying some. There are many languages you can go for, like Python, Java, C, C++, etc. It is advisable to start coding in either Python or Java, as these languages are easier to work with compared to C and C++.

[Youtube] Learn How to code to become an expert programmer.

If you don’t mind spending a few bucks, join some renowned coding classes or coding boot-camps which teach concepts right from scratch. Stay away from institutes which aim to make money rather than impart knowledge. If you want to learn by yourself at home, follow these fundamental steps: First of all, learn to code and understand basic programming logic. Start writing basic programs such as whether a number is prime or not, palindromes, Armstrong numbers, etc. After having a fair idea of such basic programs, start with the very first data structure, i.e. the array. Try solving problems related to the concept of arrays, and when you are confident enough to solve all problems related to one-dimensional arrays, start with multi-dimensional arrays. It will be easy to proceed with multi-dimensional arrays if you have practiced arrays well.

Now comes the most important part of coding, i.e. recursion. Recursion makes your code small and easier to understand. Try converting iterative code to recursive code; this will help you get an idea of how recursion works. View tutorials online to understand it better. After recursion comes understanding the complexity of the programs you write. Complexity helps you understand how efficient your code is going to be, and hence makes you a good programmer. Next, learn how to allocate memory dynamically. This is very important to learn as it helps you code efficiently. After all these basics comes the most important part of coding – DATA STRUCTURES. Begin with linked lists and move on to other data structures like stacks, queues, trees, binary trees, heaps, etc.

And now, a final piece of advice: there are several things you can do to improve your coding skills. Code as much as you can; make it your hobby. Your day should not pass without coding. Make projects – think of something and create it from scratch. Believe me, this helps a lot. Participate in open source projects;
it will also help you in reading others’ code.

If you are having a hard time with problems, do competitive programming. It will be awful at the start and you’ll get frustrated, but if you stick to it long enough and work hard enough, you’ll become a great programmer.

Don’t learn to learn. Learn to build.

An important thing we need to understand is that knowing a language and knowing how to code are two completely different things. Knowing the syntax of various languages is not going to work; one needs to work on code logic and algorithms, and it is always advisable to code in one particular language. Try not to learn every language out there.

Stick to one and master it.

Thank you for reading ! Happy Programming!

by CodedWorm at August 10, 2018 05:12 PM

Kumar Vipin Yadav (kvy)

FUNCTION WITH STRINGS IN C

Functions with Strings :-

We have studied the built-in string functions; now we will create our own functions,
i.e. we will learn to write functions with strings.

1. Copy function:-

copy( A, B )
This function will copy A into B. It works like strcpy(), but it is coded by you :).

e.g.

#include<stdio.h>
int main()
{
    char A[] = "Vipin is my name";
    char B[20] = "Empty :)";

    void copy( char [] , char [] );
    printf("Values before changes : \n\n");
    puts(A);
    puts(B);

    copy(A,B);

    printf("\nValues after changes : \n\n");
    puts(A);
    puts(B);

    return 0;
}
void copy( char x[] , char y[])
{
    int i;

    for( i = 0 ; x[i] != 0 ; i++ )
    // Here 0 in the condition represents the NUL terminator
    {
        y[i] = x[i];
    }
    y[i] = 0;
}

Output:-

Values before changes : 

Vipin is my name
Empty :)

Values after changes : 

Vipin is my name
Vipin is my name

2. Length function:-

length( A )
This function will accept a string, find its length and return it.

e.g.

#include<stdio.h>
int main()
{
    char A[] = "Vipin is my name";
    int length_of_String;

    int length( char [] );
    
    length_of_String = length( A );

    printf("Length of A is : %d.\n",length_of_String);

    return 0;
}
int length( char x[] )
{
    int i;

    for( i = 0 ; x[i] != 0 ; i++ );
    // This ';' shows that the for loop has an empty body
    return i;
}

Output:-

Length of A is : 16.

3. Counting vowels function:-

count_vowel( A )
This function will count the vowels in a string.

e.g.

#include<stdio.h>
int main()
{
    char A[] = "Vipin is my name";
    int number_of_vowels;

    int count_vowel( char [] );
    
    number_of_vowels = count_vowel( A );

    printf("Numbers of vowels in A is : %d.\n",number_of_vowels);

    return 0;
}
int count_vowel( char x[] )
{
    int i;
    int count;

    for( i = 0, count = 0 ; x[i] != 0 ; i++ )
    // Here 0 in the condition represents the NUL terminator
    {
        if ( x[i] == 'a' || x[i] == 'e' || x[i] == 'i' || x[i] == 'o' || x[i] == 'u' )
            count++;
    }
    
    return count;
}

Output:-

Numbers of vowels in A is : 5.

4. Concatenating Strings function:-

concatenate_strings( A , B )
This function will concatenate A and B, i.e. it appends B to A.
It is the same as strcat(), but here we code the function ourselves.

e.g.

#include<stdio.h>
int main()
{
    char A[50] = "my name is ";
    char B[]   = "Kumar Vipin Yadav";
 
    void concatenate_strings( char [] , char[] );
    
    printf("Value of A before calling our function : \n\n");
    puts( A );

    concatenate_strings( A , B );

    printf("Value of A after calling our function : \n\n");
    puts( A );

    return 0;
}
void concatenate_strings( char x[] ,char y[] )
{
    int i;
    int len;

    for( i = 0 ; x[i] != 0 ; i++ );

    len = i;   // index of the NUL terminator of x

    for ( i = 0 ; y[i] != 0 ; i++ )
        x[len+i] = y[i];

    x[len+i] = 0;
    // here we add the NUL terminator at the end of array A
}

Output:-

Value of A before calling our function : 

my name is 

Value of A after calling our function : 

my name is Kumar Vipin Yadav

5. Counting words function:-

counting_words( A )
This function will take a string and count how many words it has.

e.g.

#include<stdio.h>
int main()
{
    char A[50] = "my name is Vipin";
    int words;
 
    int counting_words( char[] );
    
    words = counting_words( A );

    printf("Number of word in our String is %d.\n",words );

    return 0;
}
int counting_words( char y[] )
{
    int i;
    int count;

    for ( i = 0, count = 0 ; y[i] != 0 ; i++ )
    {
        if ( y[i] == ' ' )
            count++;
    }
    return ++count;
}

Output:-

Number of word in our String is 4.

6. Counting alphabets, spaces, digits and symbols function:-

print_alphabets_spaces_digits_symbols( A )
This function will take a string and count how many alphabets it has,
how many digits it has, how many symbols it has and how many spaces it has,
and then print all of that information.
WE COULD EVEN RETURN THE ABOVE INFORMATION USING AN ARRAY – THAT'S YOUR HOME TASK, HAVE FUN 🙂

e.g.

#include<stdio.h>
int main()
{
    char A[50] = "my name is Vipin";
 
    void print_alphabets_spaces_digits_symbols( char [] );
    
    print_alphabets_spaces_digits_symbols( A );

    return 0;
}
void print_alphabets_spaces_digits_symbols( char y[] )
{
    int i;
    int space,alphabets,digit,symbols;

    space = 0 , alphabets = 0 , digit = 0 , symbols = 0 ;

    for ( i = 0 ; y[i] != 0 ; i++ )
    {
        if ( y[i] == ' ' )
            space++;
        else if ( y[i] >= 'a' && y[i] <= 'z' || y[i] >= 'A' && y[i] <= 'Z' )
            alphabets++;
        else if ( y[i] >= '0' && y[i] <= '9' )
            digit++;
        
        else
            symbols++;
        
    }
    printf("Alphabets = %d, Digit = %d, Space = %d and Symbols = %d.",alphabets,digit,space,symbols);
}

Output:-

Alphabets = 13, Digit = 0, Space = 3 and Symbols = 0.

7. Converting to lower case function:-

lower( A )
This function will take a string and convert it to lower case.

e.g.

#include<stdio.h>
int main()
{
    char A[50] = "My Name Is VIPIN";
 
    void lower( char [] );
    
    lower( A );

    puts( A );

    return 0;
}
void lower( char y[] )
{
    int i;

    for ( i = 0 ; y[i] != 0 ; i++ )
    {
        if ( y[i] >= 'A' && y[i] <= 'Z' )
            y[i] = y[i] + 32 ;
            // because the ASCII values of upper case letters are 32 less than the lower case ones.
    }
}

Output:-

my name is vipin

8. Converting to upper case function:-

upper( A )
This function will take a string and convert it to upper case.

e.g.

#include<stdio.h>
int main()
{
    char A[50] = "My Name Is VIPIN";
 
    void upper( char [] );
    
    upper( A );

    puts( A );

    return 0;
}
void upper( char y[] )
{
    int i;

    for ( i = 0 ; y[i] != 0 ; i++ )
    {
        if ( y[i] >= 'a' && y[i] <= 'z' )
            y[i] = y[i] - 32 ;
            // because the ASCII values of lower case letters are 32 greater than the upper case ones.
    }
}

Output:-

MY NAME IS VIPIN

9. Comparing function:-

compare( A, B )
This function will take 2 strings and compare them,
returning a +ve value if A is greater, a -ve value if B is greater and 0 if both are equal.

e.g.

#include<stdio.h>
int main()
{
    char A[] = "My Name Is VIPIN";
    char B[] = "My Name Is VIPIN";
    int res;
    int compare( char [] , char [] );
    
    res = compare( A , B );

    if ( res == 0 )
        printf("Both strings are equal.");
    else if ( res > 0 )
        printf("Strings A is greater.");
    else
        printf("Strings B is greater.");

    return 0;
}
int compare( char x[] , char y[])
{
    int i;

    for ( i = 0 ; x[i] != 0 || y[i] != 0 ; i++ )
    {
        if ( x[i] != y[i] )
            return x[i]-y[i];
    }
    return 0;
}

Output:-

Both strings are equal.

10. Reversing into another string:-

revers_in_another( A, B )
This function will take 2 strings and store the reverse of A in B.

e.g.

#include<stdio.h>
int main()
{
    char A[] = "My Name Is VIPIN";
    char B[50];

    void revers_in_another( char [] , char [] );
    
    revers_in_another( A , B );

    puts(A);

    puts(B);

    return 0;
}
void revers_in_another( char x[] , char y[])
{
    int i,j;

    for ( i = 0 ; x[i] != 0 ; i++ );
     
    i -= 1; // step back from the NUL terminator to the last character

    for ( j = 0 ; j <= i ; j++ )
        y[j] = x[i-j];

    y[j] = 0;

}

Output:-

My Name Is VIPIN
NIPIV sI emaN yM

11. Reversing a string in place:-

revers_in_itself( A )
This function will take a string and reverse it within itself.

e.g.

#include<stdio.h>
int main()
{
    char A[] = "My Name Is VIPIN";

    void revers_in_itself( char [] );
    
    printf("A before calling function. \n\n");
    puts(A);

    revers_in_itself( A );

    printf("A after calling function. \n\n");
    puts(A);
 
    return 0;
}
void revers_in_itself( char x[])
{
    int i,j,temp;

    for ( i = 0 ; x[i] != 0 ; i++ );

    i -= 1;

    for( j = 0 ; j <= i ; j++, i-- )
    {
        temp = x[j];
        x[j] = x[i];
        x[i] = temp;
    }

}

Output:-

A before calling function. 

My Name Is VIPIN
A after calling function. 

NIPIV sI emaN yM    

12. Palindrome function :-

palindrome( A )
This function will take a string and return 1 if it is a palindrome,
and 0 if it is not.

e.g.

#include<stdio.h>
int main()
{
    char A[] = "VIPIN";
    int res;

    int palindrome( char [] );

    res = palindrome( A );

    if ( res == 1 )
        printf("Yes String is palindrome.\n");
    else
        printf("No String is not palindrome.\n");
 
    return 0;
}
int palindrome( char x[])
{
    int i,j;

    for ( i = 0 ; x[i] != 0 ; i++ );

    i -= 1;

    for( j = 0 ; j <= i ; j++, i-- )
    {
        if ( x[i] != x[j] )
            return 0;
    }

    return 1;

}

Output:-

No String is not palindrome.

13. Abbreviation function :-

abbreviation( A )
This function will take a name and print its abbreviation.

e.g.

#include<stdio.h>
int main()
{
    char A[] = "Mohan Das Karam Chand Gandhi";

    void abbreviation( char [] );

    abbreviation( A );
 
    return 0;
}
void abbreviation( char x[] )
{
    int i,j;

    i = 0 , j = 0 ;
    do
    {
        if ( x[i] == ' ' )
        {
            printf("%c. ", x[j]);

            j = i+1;
        }
        
        i++;

    }while( x[i] != 0 );
    for ( ; x[j] != 0 ; j++ )
        printf("%c",x[j]);
}

Output :-

M. D. K. C. Gandhi

14. Remove vowels function :-

remove_vowels( A )
This function will take a string and remove all the vowels from it.

e.g.

#include<stdio.h>
int main()
{
    char A[] = "My Name Is Vipin";

    void remove_vowels( char [] );

    printf("A before calling function\n");
    puts(A);

    remove_vowels( A );
 
    printf("\nA after calling function\n");
    puts(A);

    return 0;
}
void remove_vowels( char x[] )
{
    int i,j;

    for ( i = 0 ; x[i] != 0 ; i++ )
    {
        if( x[i] == 'a' || x[i] == 'e' || x[i] == 'i' || x[i] == 'o' || x[i] == 'u' )
        {
            for ( j = i ; x[j] != 0 ; j++ )
                x[j] = x[j+1];
            
            i--;
        }
    }
}

Output:-

A before calling function
My Name Is Vipin

A after calling function
My Nm Is Vpn

In our next blog we will read about pointers 🙂 .

by kumar vipin yadav at August 10, 2018 12:58 PM

August 08, 2018

Vaibhav Kaushik (homuncculus)

First Golang Meetup

Saturday, 21 July 2018, was the first meetup of the Gurgaon Golang meetup group and also my first Golang meetup, organised at the Grofers headquarters, Gurgaon, for all the gophers in Delhi-NCR. Everybody was already halfway through the Q/A session when I reached (I was late). Kasisnu was giving a demo of an exercise from the Go book (a TCP server that periodically writes the time). You can read the source code of the exercise from here.

by Vaibhav Kaushik (vaibhavkaushik@disroot.org) at August 08, 2018 12:18 PM

August 07, 2018

Anu Kumari Gupta (ann)

Dictionary using CLI

If you write posts like me, I am sure you must quite often have referred to a dictionary. Even if you don’t require it while writing posts, or you don’t happen to blog, a dictionary is something unavoidable and should be readily available to you, because at any point of time it may unknowingly be required.

You may have different ways of using dictionaries in your life. By different ways I mean you might have used one online, or while typing you may tend to check the dictionary apps on your phone, etc. That can be ineffective at times, because it may take time to find exactly what you need and it breaks your concentration. But here am I, with some amazing handy commands, taken from this amazing book: The Linux Cookbook by Michael Stutz, that you can use to refer to a dictionary; it won’t take much of your time or effort to get comfortable with them. Let me show you how you can use a dictionary through the command line interface.

If you are using GNU/Linux, you have a list of words already in your system. You can find it by using: whereis dict. You will get the location of the source.  It should be either /usr/dict or /usr/share/dict.  The traditional Unix-style dictionary is the words sorted in ascending order, a long list albeit! The newer type dictionary contains the headwords as well as their definitions. To use the latter, you need to install WordNet – “a lexical database containing Nouns, verbs, adjectives and adverbs, grouped into sets of cognitive synonyms (synsets), each expressing a distinct concept”. Note: There are other databases other than wordnet as well.

Let us look into what we can do with the System Dictionary:

  • Sometimes it so happens that you don’t remember the full word, but if you remember the first part of it, you may wish to look up words that begin with that string. To look up words beginning with ‘civ’ (say), you have the command: look civ. It will return the list of words that start with the given string.
  • To list all the words containing a string (say, int), regardless of case: grep -i int /usr/share/dict/words
  • To list all the words that end with a string (say, ed) use: grep ed$ /usr/share/dict/words

Using Wordnet, you can find the meaning of the words as well. To install in your system, it is as simple as sudo apt install wordnet (I am using Ubuntu).

Let us have a look into what we can do with Wordnet:

  • To see all the available options for that particular word, you can use: wordnet your_word. your_word is the word which you want to search for.
  • To search the definition of a particular word (say, sad) along with some sentences as examples, use : wn sad -over or wordnet sad -over

Personally,  I like Wordnet because it’s just a terminal away from me. I don’t have to go anywhere to hunt for the exact meaning of a particular word. It allows me different options/forms of what I want from that word. For example, it shows me all the derived senses of the word ‘sad’, if I use wn sad -deria. The output is:

Screenshot from 2018-08-08 01-59-29

Isn’t it cool? And amazingly, it has different options for different words. -deria was option available for this word because a at the end denotes the adjective (the type of the word ‘sad’) and deri denotes the derived forms. If you try with some different word like fun, you will see different options for wn funas :

Screenshot from 2018-08-08 01-53-56.png

Note how the options for this word changed. We have hypernyms, hyponyms, synonyms, attributes, derived forms, domain, familiarity, coordinate terms, common words available for this word ‘fun’. Here, n at the end of the commands denotes the Noun form (the word fun being noun).

Ispell is another such amazing tool, for spell-checking a file interactively. You can install it using sudo apt install ispell. Say you have a file named ‘file.txt’. To spell-check the entire file, you just need to use: ispell file.txt. Amazingly, it corrects each misspelt word individually, offering a lot of substitutes and similar-sounding words, along with options to add the word to your personal dictionary, replace it then and there, uncapitalize it, look it up, etc.

Screenshot from 2018-08-08 02-54-05

Screenshot from 2018-08-08 02-54-26

Do you feel any difference between what you use as your dictionary and the one above i.e., using command line interface? I hope Yes.

I like playing around with Linux tools and commands because they have so much to offer according to my needs. The best part is that I can modify the commands according to my desire. Obviously, I don’t need to learn the commands. More than that, I like to share these petty tricks and information with you all, as I happen to come across them. Hope this article was helpful to you!

by anuGupta at August 07, 2018 09:45 PM

Bhavesh Gupta (BhaveshSGupta)

#readingeveryday 2

For #readingeveryday (here is the link), today I dedicated 15 minutes to reading documentation and the PYM Python book.

August 07, 2018 06:06 PM

Priyanka Sharma

The “line” which keeps me motivated !


Whenever you feel less energized or demotivated, just ask yourself one question –

“What is stopping you ?”

The answer that comes to your mind is “Nothing is stopping me!” Always remember that someone in the world has already faced and overcome the problem you may be facing. So get up, ask yourself a few questions, and answer them:

  • Do I have the Potential ?

Yes, I definitely do, and why not! I am capable of developing my potential into actuality.

  • Do I have the Courage ?

Yes, I have the strength to venture, persevere and withstand danger, fear or difficulty !

  • Do I have any reason for not making things happen and not giving the best I can?

Definitely not; I have no excuse for not doing my best!


by priyanka8121 at August 07, 2018 03:36 PM

August 06, 2018

Bhavesh Gupta (BhaveshSGupta)

#Readingeveryday Day-1

I have been having difficulty making reading a habit, so my friend Jason introduced me to the #readingeveryday challenge to get me started; here is the link. This post marks day 1. Right now, as I am starting out, I am reading 15 minutes a day. Being a slow reader, I was able to read 6 pages of the book I am currently on, “Think and Grow Rich” by Napoleon Hill.

August 06, 2018 10:33 AM

Pradhvan Bisht (pradhvan)

Grokking Algorithms

There are some books you suggest to every person who asks, “How should I start with …?” Being a recent graduate of yet another engineering college, I have been asked a few times, “How do I start with programming? Can you recommend some books?” To those people I generally recommend this book along with Python for you and me, followed by “Jokes on you mate, I am as clueless as you are about programming 😛”.

To justify why I recommend this book to everyone starting with programming, here is my short review; if you want a one-liner, I have added a TL;DR version too.

Grokking_algo

TL;DR: When a comic book meets a programming book, the result is Grokking Algorithms.

I have broken the review down into a very basic pros and cons list that should give you an idea of the entire book. Starting with the pros:

1. Visualization is the key. One of the book’s strengths lies in the neat sketches you get on every page, combined with a unique way of storytelling. So while you’re busy following the story and looking at the sketches, the author quietly passes on those computer science fundamentals that would otherwise be taught in a boring lecture, and they seem fascinating and easy while reading the book. For example, in the hash tables chapter there is a character, Maggie, who works in the grocery store and knows all the product prices. So instead of searching the price catalog, the cashier can instantly check a price by asking her; in fancy computer science terms, she can give the price in O(1) time for any item, no matter what!

Maggie
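The Maggie analogy maps directly onto a hash table in code; in Python that is simply a dict. A minimal sketch (the items and prices here are made up for illustration):

```python
# A dict is a hash table: average O(1) lookup, like asking Maggie for a price.
# The catalog below is hypothetical.
prices = {"milk": 1.49, "bread": 2.20, "eggs": 3.10}

print(prices["bread"])   # instant price check, no scanning a catalog
print("eggs" in prices)  # membership checks are O(1) on average too
```

However long the catalog grows, the lookup cost stays roughly constant.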

One more example I can think of is the way it relates folding paper to binary search, where the search space is divided in half at every step, just as a sheet of paper is halved every time we fold it.

B-search
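The paper-folding idea is exactly what binary search does: each comparison halves the remaining search space. A minimal sketch, not taken from the book:

```python
def binary_search(items, target):
    """Return the index of target in a sorted list, or None if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2   # "fold" the search space in half
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1       # keep the upper half
        else:
            hi = mid - 1       # keep the lower half
    return None

print(binary_search([1, 3, 5, 7, 9], 7))  # 3
```

Halving at every step is why binary search takes O(log n) comparisons instead of O(n).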

The book is filled with such similar stories focusing on small but important concepts.

2. Short and crisp exercises. After every chapter there is a recap of the major points and a small exercise of hardly 10 questions, a blend of code and theory that is enough to practice what you learned in the chapter. Plus, the code is available in the GitHub repo, and the answers to the theory questions are printed at the end of the book, saving the time and energy of searching for solutions online.

3. A great supplement. Although it is a good stand-alone book for a beginner programmer who is just starting out with Python and is interested in algorithms, I believe this book is best read alongside a standard textbook, because it majorly focuses on concepts and somewhat fails to go into much depth, which matters if you’re taking an algorithms course in college or want to know more about a particular topic like hash tables, greedy algorithms, etc.

As much as I love this book there are some cons to it too 😦

1. Expensive. The major drawback I found is that this book is a bit expensive. For a freshman it’s fine, because they are yet to learn a lot and the book offers a good head start, but for an intermediate user it’s still a big deal.

2. Too basic for intermediate programmers. I picked up this book in my last year of engineering, when I already had a basic understanding of the concepts it explains, so I wasn’t that amazed by what the book has to offer, as it lacks depth in the topics covered; but I was amazed at how easily they were explained.

The points above cover all the major sections of the book and should give a brief idea of what I think its strengths are and what it lacks.

Book Rating: 3.5/5

So if you are a freshman in college, starting an algorithms course, or just curious about algorithms, and have decent savings left over from what you usually spend on partying 😛, give this book a shot; you won’t be disappointed.

 

by Pradhvan Bisht at August 06, 2018 08:07 AM

August 05, 2018

Aman Verma (nightwarriorxxx)

Week6 -Day1 and Day 2

“It takes as much energy to wish as it does to plan.”

-Eleanor Roosevelt

Week 6 Day 1 and Day 2 were awesome. I went to two meetups, one each day.

The first one was the FOSS Mega Meetup at Adobe Systems. The talks given there were awesome. The schedule was as follows:
1. How to Learn Effectively
2. Akka Reactor
3. Panel Discussion
4. Hiring and Pitching sessions
As it was my first meetup, it felt amazing and I was introduced to great people working on different things. These amazing people with great skill sets not only cleared my doubts but also gave me advice on how to solve every problem, and cleared up my confusion too.

The second was the LinuxChix India meetup at IIIT-D, another great event organised by more great people with great skill sets. The schedule was:
1. Bash for Beginners
2. Chaos Engineering Session
3. How to Contribute to Open Source
The “Chaos Engineering” session was just phenomenal. It’s like you break the system and then others fix the system. Seriously, I learned a great deal.

Happy hacking

by nightwarrior-xxx at August 05, 2018 05:51 PM

Ratan Kulshreshtha

DevConf.in 2018

I attended DevConf.in, the first annual Developers’ Conference organised by Red Hat, held at Christ University in Bengaluru, India, on 4-5 August 2018. Around 1323 attendees came to DevConf.in 2018, along with 110 speakers. There were around 14 parallel tracks (Agile, Blockchain, Cloud and Container, Community, Design, Developer Tools, DevOps, IoT, Machine Learning, Middleware, Platform, QE, Security, Storage) plus BoFs and workshops, so a pretty much completely packed schedule.

August 05, 2018 05:13 PM

Priyanka Sharma

Algorithms: Heart of Computing

Before there were computers, there were algorithms. What are algorithms? Why is the study of algorithms worthwhile? What is the role of algorithms relative to other technologies used in computers?

These are some questions that come to mind. So let’s begin!

What are algorithms ?

Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output. We can also view an algorithm as a tool for solving a well-specified computational problem.

What kinds of problems are solved by algorithms?

Some practical applications of algorithms are:

  • The Human Genome Project has made great progress toward the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms.

download (1)

  • The Internet enables people all around the world to quickly access and retrieve large amounts of information. With the aid of clever algorithms, sites on the Internet are able to manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data will travel and using a search engine to quickly find pages on which particular information resides.

download (2)

  • Electronic commerce enables goods and services to be negotiated and exchanged electronically, and it depends on the privacy of personal information such as credit card numbers, passwords, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures, which are based on numerical algorithms and number theory.

download (3)

Algorithms as a technology

Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms? The answer is yes, if for no other reason than that you would still like to demonstrate that your solution method terminates and does so with the correct answer. If computers were infinitely fast, any correct method for solving a problem would do. You would probably want your implementation to be within the bounds of good software engineering practice (for example, your implementation should be well designed and documented), but you would most often use whichever method was the easiest to implement. Of course, computers may be fast, but they are not infinitely fast. And memory may be inexpensive, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. You should use these resources wisely, and algorithms that are efficient in terms of time or space will help you do so.

So, that was a small introduction to algorithms.

by priyanka8121 at August 05, 2018 04:34 PM

Abdul Raheem (ABD)

Session on PYM book and guest sessions

Hello, world!

Sorry for the late blog again; it’s because of my external exams. One update: there will be no blogs for about 10-20 days due to those exams. Sorry for the excuse. Anyway, I hope you are enjoying my blogs and will enjoy this one too…

So on 27-August-2018 we had a session on the PYM book by Kushal Das, on the looping chapter. There we had a game called the game of sticks, and he asked everyone to modify the game’s code so that the user wins every time, not the computer. So everybody started modifying it (me as well 🙂). Some changed the number of sticks from 21 to 20, but in that case the computer could still win, and sometimes the number of sticks went negative. Then a clever answer came up in the discussion: change the word “lose” to “win”. It was given by vishalirc (IRC nick), and the discussion went on. In this session I got to know how to think differently, how to analyse code, and how to reason about how it works. I really learned some good things from this session; all thanks to Kushal and dgplug 🙂. And as always, the homework goes on.

Guest session by Jennifer Helsby (redshiftzero IRC nick)

So who is she, and what does she do? She is currently the lead developer of SecureDrop, a whistleblowing platform, at the Freedom of the Press Foundation.

  • She is also the CTO of Lucy Parsons Labs, a 501(c)(3) non-profit that does investigative journalism and police accountability work.
  • Previously she worked at Data Science for Social Good, which she came to after a Ph.D. in astrophysics.

The session went on with questions and answers. I don’t want to go too deep into it as I don’t have much time, but I will leave the link for the session; do check it, as it was a very good session. You can go through the session over here.

Guest session by Vaishali Thakkar (Vaishali IRC nick)

So again, who is she, and what does she do? She is currently working as a freelance Linux kernel engineer and is a co-organiser of RGSoC, a paid scholarship program for women and non-binary people.

  • She is also a volunteer co-coordinator for the Linux kernel in the Outreachy program.
  • Previously she worked as a kernel engineer at Oracle and as a Linux kernel intern under the Outreachy program.

The session went on with questions and answers. Again, I don’t want to go too deep, as it would become a very lengthy blog and I don’t have much time. Do go through the logs, as it was also a good one. Not just these two; every guest session was good and will be good. You can go through the logs over here.

Session on PYM book by our very own Kushal Das

So due to my college work and these bloody exams, I couldn’t attend this session, but I still got to know some good things from the PYM book, which was given to us by Kushal Das. I went through the logs and saw this question by Kushal:

Q: Say you have 500 names, or 5000 names, and I want you to tell me whether “asdfas” exists in them or not. How will you store the names in your code?

My answer to this was a list, but I got to know one more answer: using a dictionary. Anyway, here are the answers to the above question.

1. Using a list: we can check whether a value is in a list using in, e.g. "asdfas" in names, where names is a list of all the names.

2. Using a dictionary: say d = {"name": True} and we have added all the names as keys; then we can use in again: "asdfas" in d.
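Both answers can be tried directly in the interpreter. A small sketch, with a hypothetical handful of names standing in for the 500 or 5000:

```python
# Hypothetical sample; imagine 500 or 5000 entries instead.
names = ["kushal", "sayan", "asdfas"]

# 1. List membership: Python scans the list item by item.
print("asdfas" in names)  # True

# 2. Dict membership: all names become keys, lookup is a hash check.
d = {name: True for name in names}
print("asdfas" in d)      # True
```

For a few thousand names either works; the dict version avoids scanning the whole list on every check.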

You can go through the logs of the Pym session of  3-August-2018 over here.

Happy learning 🙂

by abdulraheemme at August 05, 2018 01:16 PM

August 03, 2018

Bhavesh Gupta (BhaveshSGupta)

Reading Habit

I am trying to develop the habit of reading. Recently, while going through the #dgplug summer training, there has been discussion about having a habit of reading: you write one-tenth of what you read. Reading and writing are both considered among the most important skills of an engineer. So, trying to follow the above, I started reading books, still non-technical ones, but as a first step I did start. I do a lot of technical reading, but I never write about it, so one of my tasks is to start writing about my technical readings, be it GitHub projects or anything else; I am going to try to write about them.

August 03, 2018 10:28 PM

Aman Verma (nightwarriorxxx)

#Week5 Day6

“Don’t blame others as an excuse for you not working hard enough.”

I am a little bit confused these days because of my college schedule, and that is the main reason I won’t be able to write a blog daily and punctually. I know it’s difficult to manage time, but at the end of the day I have to plan something to manage mine. I just got advice from a senior: you will be able to manage time if you have short-term goals; otherwise you will only waste it.

Coming to the technical part: I have been learning Python these days, and now I have moved on to Django. I have started a project in which an admin can write and post blogs. It’s always good to start with a project to learn anything. I have a strong desire to contribute to the open source community. Hopefully I will!!

Happy Hacking

by nightwarrior-xxx at August 03, 2018 04:54 PM

August 02, 2018

Anu Kumari Gupta (ann)

Internet Relay Chat

Hola folks!

If you are hearing about IRC for the first time and you wish to know about it, don’t worry, you are at the right place. This post should get you started and guide you along. I have covered the basic things that you should know and some useful tips and tricks. Enjoy reading.

What is IRC?

As Wikipedia defines it, IRC (Internet Relay Chat) is an application-layer protocol used for communication in the form of text chat. To start talking over IRC, you need an IRC client. These are basically applications that need to be installed, or you can chat through your browser. Examples of IRC clients for your PC are XChat, WeeChat, HexChat, etc.; you can find a comparison here. Mobile applications include AndroIRC, IRCCloud, HoloIRC, etc. Browser-based clients include KiwiIRC, Mibbit, etc. Try them all and choose what you like! These IRC clients connect to a server you have set as default in your program settings. Communication happens in channels (discussion forums) and follows certain rules and etiquette.

If you have a question like “Why IRC?”, it is because IRC is considered the main communication channel for open source projects. It is a very lightweight platform that can run even on low-bandwidth Internet. There are several channels available on several servers, which you can join according to your interests (my favorite, the one close to my heart, is #dgplug, the Linux Users’ Group of Durgapur, with the motto “Learn and teach others” 🙂). All you need is the proper name of the channel you wish to join. Internally, every message you send in a channel through your client application is sent across the Internet to the IRC servers, which echo it to every other user connected to the channel.

Interested? Let’s get accustomed with the basics.

How to get started?

To get started, follow these steps:

1. Like I mentioned, to join a particular channel you start by connecting to a server. The simplest way of doing it is running this command in your IRC client: /server chat.freenode.net. You can also do it manually, depending on the IRC client you are using.

2. To start using IRC, you need a nickname, i.e., the name by which you will be addressed. You will be called by this name when you join a channel; in other words, other users will know you by the name you provide at the time of registration. Choose a nickname and type this in the chat area of freenode: /nick YourNick. Note: YourNick is the name you choose for yourself. It may contain lower-case letters, underscores (_), digits (0-9), or hyphens (-), with a maximum of 16 characters.
If you face an issue like “The nick is already in use”, it means you have to choose a different nick, because that one has already been taken.

3. The next step is to register your nick, so that it becomes permanent and other people are restrained from using it. To register, type the following in the freenode chat area: /msg nickserv register password email. Password is the password you wish to set to protect your nick and claim your identity; email is the email address you give for confirming the registration.

4. When you receive the confirmation mail, follow the instructions specified in it, typing in the chat area of freenode.

5. After you have verified your nick, you are all set to go ahead. If you ever get disconnected, you can identify yourself (similar to sign-in option that you use in other platforms) using: /msg nickserv identify YourNick Password.

6. Now that you have a proper identity, your task is to join a channel i.e., your main purpose on being in IRC. There are several channels that you can join and discuss. To join a channel, just write /join #channel_name. In place of channel_name, you have to write the proper name of the channel existing on the server.

Dos and Don’ts

Unlike any other platform, there are certain norms that you must keep in mind while using IRC:

  • Type in full proper English sentences.
  • Be polite and gentle.
  • Be patient, especially when nobody answers you or when you disagree with someone on a particular topic.
  • Remember IRC channels are very useful in gaining knowledge and you happen to see knowledgeable, renowned people, specialists in a particular domain hanging around in channels. So, you should be specific and meaningful in the questions you ask.
  • Do not act foolish.
  • Do not use any slang. Don’t swear. Don’t be obnoxious.
  • Do not use ALL CAPS TO TYPE – it is like you are shouting at someone.
  • Do not flood the channel, i.e., do not send too many messages all at once.

The above are some basic requirements to keep in mind before speaking up in any channel. Any intolerant or wayward behavior could get you kicked out or banned permanently from a channel. Yes, you heard that right!

Utilities of IRC

IRC is very useful if you use it the right way. Make the most of IRC by learning from other people’s issues, exploring stuff, finding solutions to your problems by reaching out to people in that domain, finding projects and mentors, etc. I leave it to you to find more 🙂

Talking about some other basic functionality: if you are on IRC (Internet Relay Chat), you get certain services known as IRC services. You use these to modify and/or add functionality to your account. They are basically a special type of bot with several statuses and flags for you to set. The most common services you will find are:

  • NickServ – a nickname service bot. You might have observed that while registering you messaged NickServ with your email and password. It has other functions too.

Type /msg NickServ help and you are sure to get the following commands:

SET                               – Sets various control flags
UNGROUP                   – Removes a nickname from your account
Other commands     – ACC, ACCESS, CERT, DROP, HELP, LISTLOGINS, LISTOWNMAIL, LOGOUT, REGAIN, SETPASS, STATUS, TAXONOMY, VACATION, VERIFY

If you do not know what these commands are for, you can try to seek individual help by typing in: /msg NickServ help <command>. There may be some sub-commands within a command, like in case of SET command, if you type /msg NickServ help SET, you get to see the following set of sub-commands:

ACCOUNTNAME – Changes your account name
EMAIL                   – Changes your email address
EMAILMEMOS    – Forwards incoming memos to your email address
ENFORCE              – Enables or disables automatic protection of a nickname
HIDEMAIL            – Hides your email
NEVERDROP        – Prevents you from being added to access lists.
NOMEMO              – Disables the ability to receive memos
NOOP                     – Prevents services from setting modes on you automatically
PASSWORD          – Changes the password associated with your account
PRIVATE               – Hides information about you from other users
PROPERTY           – Manipulates metadata entries associated with an account
PUBKEY                – Changes your ECDSANIST256p-CHALLENGE public key.
QUIETCHG           – Allows you to opt-out of channel change messages.

Now to see more of what each sub-commands offers you, you can type /msg NickServ help <command> <sub-command>. Example – /msg NickServ help SET HIDEMAIL.

Do try to configure all the flags that are essential for you. One useful flag is ENFORCE, which ensures your nick is protected by automatically changing the nick of anyone who attempts to use yours. Another useful flag is PRIVATE, which hides your information.

  • ChanServ – a channel service bot. It provides the status of the channel, i.e., it maintains basic information about the channel, such as when a user joins or leaves. It is managed by the channel operators/admins; normal users have no access to ChanServ. ChanServ provides several helpful services, like kicking users, banning users, changing the channel topic, etc.

You can obtain the info of a particular channel with: /msg chanserv info #channel. The information will contain the founder of the channel, the timestamp and date of the channel’s registration, the status of the channel, and its flags.

  • MemoServ –  a memo service bot, is used to record and deliver messages to users who are currently offline.
  • OperServ is used by IRC operators to perform administrative functions. Also known as AdminServ, RootServ, or OpServ (srvx).

Some others are also seen following this naming convention in some services packages including:

  • BotServ, a bot which allows channel operators to assign bots to their channels. These bots are mostly a ChanServ representative in the channel.
  • HelpServ, the help service bot, is used to provide help on a variety of IRC topics.
  • HostServ, a host service bot, is used to allow general users to register a vhost so that their real IP address on IRC is hidden.
  • RootServ, used on specific networks, is utilized by Services Root Administrators to perform Root Administrative functions for the IRC Network and Services Program.
  • SpamServ, used to protect channels against spam.
  • StatServ, a statistic services bot, is used to perform various statistical analysis on the IRC Network.

[Reference URL : https://en.wikipedia.org/wiki/IRC_services under heading : Components]

Useful commands

Now that you have figured out the basic commands and their usage, there are several other commands and options available in IRC like the following:

/LIST – It lists all the available channels on the server. The list is really long.

/WHOIS <nick> – It shows information about the specified nick.

/NAMES #channel – It lists all the users on that channel.

/AWAY <message> – Tells other users that you are not using IRC at the moment, with a message. /AWAY can also be used without a message.

/QUIT <message> – To quit the IRC and the current channel with a message.

/ME <message> – To denote any action of yours. Example- /me is happy to join dgplug will appear as ann is happy to join dgplug (ann is my nickname. If you use this, your nick will appear instead of ann)

/QUERY <nick> – Opens a separate window for privately messaging the specified nick.

You can find more on this here.

Modes in IRC

I have covered most of the basics that you need to know about IRC. Another important concept that is used in IRC is that of channel modes used by channel operators and user modes that is used by users/participants. These modes have different use cases and functionalities. You can check the several possible modes here.

To set user mode, you need to type in the chat head of freenode: /mode nickname +/-mode. So, for example, you wish to set +R mode, which ignores private messages from users who are not identified with services, you have to type in /mode nickname +R. Similarly, to remove this feature, you can do /mode nickname -R.

Channel operators can set modes accordingly with the help of the available channel modes. These modes act like core settings of a channel. One such example is +r, which prevents users who are not identified to services from joining the channel.

If you as a user wish to see the modes set by a particular channel, you can check by: /mode #channel.

Facts and Figures

Hope you got a glimpse of IRC and enjoyed knowing about it. Unfortunately, according to the statistics, the usage of IRC declined by 60% after the social networking platforms like Facebook, Twitter, etc came into being.

Personally, I like IRC very much and I find it more useful than any other social platform. I  am sure you will feel the same once you try it. Go forth and conquer IRC!

 

Special thanks to Jason Braganza for editing and correcting my grammar.

by anuGupta at August 02, 2018 10:39 PM

August 01, 2018

Shiva Saxena (shiva)

How to create bootable USB using CLI

Hey! I was curious whether we can create a bootable USB drive from the command-line interface alone. I have always had the utmost trust in the CLI, but this task seemed tricky. So I set myself at ease and dived into searching for a solution on DuckDuckGo.

Surprisingly, I found the solution much sooner than expected. The following is a simple, efficient, and fast way to create a bootable USB drive from your command line alone.

Note: I am using Ubuntu 16.04 LTS in this tutorial. You may follow along with other GNU/Linux distros as well. Months back I wrote a post on How to make a Bootable pen drive using Rufus?; feel free to have a look at it (in case you want to create a bootable USB from Windows).

Requirements

  1. Any GNU/Linux distro.
  2. A formatted USB drive.
  3. ISO image of Operating System.

Step by step

1. Have a look at existing devices

1. Use the command lsblk to see the list of existing block devices in your system

$ lsblk
NAME  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda     8:0    0 298.1G  0 disk 
├─sda1  8:1    0  11.4G  0 part 
├─sda2  8:2    0   350M  0 part 
├─sda3  8:3    0 114.2G  0 part 
├─sda4  8:4    0     1K  0 part 
└─sda5  8:5    0 172.1G  0 part /
loop0   7:0    0  86.9M  1 loop /snap/core/4917
loop1   7:1    0  86.9M  1 loop /snap/core/4830

2. Identify your device

  1. Plug-in your USB device.
  2. Use lsblk again to identify your device. In my case it is sdb->sdb1
$ lsblk
NAME  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda     8:0    0 298.1G  0 disk 
├─sda1  8:1    0  11.4G  0 part 
├─sda2  8:2    0   350M  0 part 
├─sda3  8:3    0 114.2G  0 part 
├─sda4  8:4    0     1K  0 part 
└─sda5  8:5    0 172.1G  0 part /
sdb     8:16   1   7.5G  0 disk 
└─sdb1  8:17   1   7.5G  0 part /media/shiva/myusbdrive 
loop0   7:0    0  86.9M  1 loop /snap/core/4917
loop1   7:1    0  86.9M  1 loop /snap/core/4830

3. Unmount your USB

To make your USB bootable, first, you need to unmount it.

$ umount /dev/sdb1

If the command above doesn’t work try it with sudo.

Note: Make sure you are selecting the correct device.

Make sure that the device is unmounted using lsblk command again, and notice that the MOUNTPOINT for your device has been removed.

$ lsblk
NAME  MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda     8:0    0 298.1G  0 disk 
├─sda1  8:1    0  11.4G  0 part 
├─sda2  8:2    0   350M  0 part 
├─sda3  8:3    0 114.2G  0 part 
├─sda4  8:4    0     1K  0 part 
└─sda5  8:5    0 172.1G  0 part /
sdb     8:16   1   7.5G  0 disk 
└─sdb1  8:17   1   7.5G  0 part 
loop0   7:0    0  86.9M  1 loop /snap/core/4917
loop1   7:1    0  86.9M  1 loop /snap/core/4830

4. Make it bootable

To make the USB bootable, GNU/Linux gives us a simple tool: the dd command.

$ sudo dd bs=1M if=/path/to/os_image.iso of=/dev/<device> conv=fdatasync

Here, <device> should be replaced with your device name. In my case it is sdb.

NOTE: use the root device name, in my case “sdb”, not the partition “sdb1”. If you go with “sdb1”, your device might not be able to boot. So in my case, the command would be:

$ sudo dd bs=1M if=~/Downloads/Fedora/Fedora-Workstation-Live-x86_64-28-1.1.iso of=/dev/sdb conv=fdatasync
[sudo] password for shiva: 
1705+0 records in
1705+0 records out
1787822080 bytes (1.8 GB, 1.7 GiB) copied, 309.198 s, 5.8 MB/s

It may take a while to complete. You can learn what the different attributes in this command do from its help output: dd --help.

5. Have fun!

Believe it or not, your USB is now bootable. I tried this method to make a bootable USB to install Fedora 28, and my device is good to go. I ran Fedora in try mode first, and will be installing it soon.

Thanks to Hayk Gevorgyan for his post: https://linoxide.com/linux-how-to/create-bootable-ubuntu-usb-flash-drive-terminal/

Thanks for reading!

by Shiva Saxena at August 01, 2018 04:12 PM

July 31, 2018

Manank Patni

Day 26

I have started solving programming exercise questions from this Github repository. I have set myself a task of 5 questions per day and will be pushing the solutions to my Github account. The exercises have 3 levels of difficulty, from beginner to advanced, so I will get a good grasp of the Python language.

by manankpatni at July 31, 2018 06:00 PM

Prashant Sharma (gutsytechster)

Get to know reStructuredText (reST)

While documenting our projects, we often use Markdown syntax and save the file with a .md extension. But have you ever noticed a repository with something different in its README file? Do you always see the .md extension? Well, I first encountered reStructuredText when I saw a repository with a .rst extension for its README instead of .md. I was a bit confused about what it could be, so I started searching, and my curiosity led me to the concept of reST. So, does that mean Markdown and reST are the same? Well, that would be like asking which flavor of Lay’s is better, Mango Salsa or the Classic one (I like the Classic one!). Of course, that depends on the user and their use case. So it is here.

However, the main focus of Markdown is on static web pages without much formatting, and that's where it shines. The main focus of reST, on the other hand, is writing documentation. There is no hard restriction though; one can use either of them in any way. To process reST you need two Python packages, which you can install using pip:

pip install docutils
pip install sphinx

These packages provide the rst2html utility, which converts an rst file to an HTML file. Suppose you create a file named practice.rst containing reStructuredText; a browser can't render it directly. To generate the HTML, you would need to do something like this:

rst2html practice.rst practice.html

Now you have the equivalent HTML file, which can be used to display the content in the browser. You can convert the rst file into various other formats using the other rst2* utilities.
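If you prefer to do the conversion from Python instead of the shell, the docutils package (which rst2html is a thin wrapper around) exposes this directly; a minimal sketch:

```python
# Convert a reStructuredText string to HTML using docutils,
# the same machinery the rst2html command uses.
from docutils.core import publish_string

rst_source = """
Practice
========

*italic*, **bold** and ``code``.
"""

html = publish_string(rst_source, writer_name="html").decode("utf-8")
print("<em>italic</em>" in html)   # the inline markup was converted
```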

Note:- After each change to the rst file, you'll need to regenerate the HTML file using the same process as above.

reST Syntax

reST has a simple syntax that is easy to understand. Let's get familiar with its features:

Paragraphs

Paragraphs are chunks of text separated by one or more blank lines. All lines of the same paragraph must be aligned at the same indentation level.

Inline Markup

Often we need to highlight specific parts of the text. For that we use inline markup.

*I am italic*
**I am bold**
``I have a code sample inside me``

They are as simple as they look. However, they have a few restrictions: they can't be nested within one another, and they must be surrounded by whitespace. For example:

These must be *surrounded* by space.

Note the spaces before and after asterisk.

Lists

We use lists in our documentation very often. To create a list, just prefix each list item with an asterisk (*) and indent all the items at the same level. This produces an unordered list. To produce an ordered list, prefix each item with a number followed by a period. There is something new here too: you can use # to auto-number the list items.

* LIST ITEMS

  * Nested list item
  * Another nested list item

1. ORDERED LIST

  #. Autonumber list
  #. This item will also be autonumbered.

You can nest lists as shown in the above example; just make sure to leave blank lines before and after the nested list.

Definition List

A definition list is made of a term followed by relatively indented text. One can use it to define terms and their descriptions, which can span multiple paragraphs. Just make sure to give the same indentation to all the lines in the description block.

term (up to a line of text)
    Definition of the term, which must be indented

    and can even span multiple paragraphs.

Blockquotes

Blockquotes can be created by simply indenting them more than the surrounding paragraphs.

This is a normal text in a paragraph
   This is a part of blockquote
This is not part of blockquote as it is not indented at same level.

Line Blocks

Line blocks are used to preserve line breaks exactly as written.

  | These lines will be
  | broken exactly as
  | it seems

Literal Blocks

Literal blocks are used to escape any special meaning of the symbols they contain. Create one by ending a paragraph with the special marker “::”. The whole block must be indented and separated from the surrounding text by blank lines.

This is a normal text paragraph ::

    This is a part of literal block.
    All lines at this indentation level won't be processed in any way.

    It can span multiple paragraphs

This is again part of normal text paragraph.

The marker has some special handling: if “::” is preceded by whitespace, it is removed entirely, and if it is preceded by non-whitespace, it is replaced by a single colon (:).

Doctest Blocks

Doctest blocks are interactive Python sessions pasted into the text. They are used to show example snippets and do not require literal block syntax to be processed. They must end with a blank line.

>>> 2 + 3
5

Tables

To create a tabular structure in documentation, reST provides two ways. Simple tables are easy to create, but come with some restrictions: they must contain more than one row, and cells in the first column can't contain multiple lines. They look something like this:

====== ======= =========
A       B       A and B
====== ======= =========
True    True    True
False   True    False
True    False   False
False   False   False
====== ======= =========

However, to create a more complex table, we need to draw the grid structure ourselves. It might seem difficult at first, but it isn't. It looks something like this:

+--------------------------+----------+----------+---------+
|Header row, column 1      | Header 2 | Header 3 | Header 4|
|(Header rows are optional)|          |          |         |
+==========================+==========+==========+=========+
| body row 1, column 1     | column 2 | column 3 | column 4|
+--------------------------+----------+----------+---------+
| body row 2               |    ..    |    ..    |    ..    |
+--------------------------+----------+----------+---------+

That's it. Believe me, I made this one myself. The header row is separated from the body rows by the “=” signs. We will see the precedence of different symbols in a few moments. Keep reading.

Links

We sometimes need to refer to links in our documentation. Using links is easy in reST.

We can use inline links like this:

`Link Text <https://www.example.com>`_

Note the underscore at the end. The link is wrapped in backticks followed by an underscore. One can also use a named target link:

This paragraph contains a `target link`_.
.. _target link: https://www.example.com

Don't forget the space between the two dots and the underscore.

Headings/Sections

Headings are created by underlining (and optionally overlining) the heading text with a punctuation character. Several characters can be used to mark headings; however, the following convention is generally followed, as it is used in the Python style guide:

# with overline, for parts
* with overline, for chapters
=, for sections
-, for subsections
^, for subsubsections
", for paragraphs

But there is no hard restriction. You can use any character for your headings; just make sure you remain consistent throughout the document. For example:

=================
This is a heading
=================

---------------------
This is a sub-heading
---------------------

Images

We can insert images in our documentation through the following syntax:

.. image:: path/to/image

Again, don't forget the space after the two periods. Pretty easy, isn't it?

Footnotes

Including footnotes is also pretty simple in reST. Just do the following:

This is footnote 1 [#]_ and this is footnote 2 [#]_.

.. [#] First footnote
.. [#] Second footnote

I have used auto-numbered footnotes here. You can also use [1]_ or [#f1]_; that depends entirely on the user. Footnote labels are either numeric or start with #.

Comments

Comments are an important part of any language, small or big. They help readers understand what a particular section does and what options can be added where. In reST, one can write a comment as:

.. This is a comment.

You can also write multi-line comments as

..
   This is a multi line comment

   It can span along paragraphs.
   just make sure to keep the same
   indentation level throughout the comment section.
This is not a comment.

References:

  1. http://www.sphinx-doc.org/en/master/usage/restructuredtext/basics.html
  2. http://docutils.sourceforge.net/docs/user/rst/quickstart.html
  3. https://dgplug.org/irclogs/2012/rst-primer/

Conclusion

These are just a few basics of reST. There is more to it, and I don't know much myself yet. As soon as I learn more, I'll definitely come up with another blog post. Till then…

Be curious and keep learning.

by gutsytechster at July 31, 2018 05:44 PM

July 30, 2018

Anu Kumari Gupta (ann)

J. P Barlow ~ EFF

February 7, 2018 was a dark day, when John Perry Barlow passed away at the age of 70 due to a heart attack in San Francisco. John Perry Barlow, a co-founder of the Electronic Frontier Foundation and the Freedom of the Press Foundation, political activist, and lyricist for the Grateful Dead, championed the ideals of a free and open Internet. He had a vision to change our view of how we use the Internet. He came to cyberspace in 1985, and in 1996 his Declaration of the Independence of Cyberspace opened like this:

Governments of the Industrial World, you weary giants of flesh and steel, I come from Cyberspace, the new home of Mind. On behalf of the future, I ask you of the past to leave us alone. You are not welcome among us. You have no sovereignty where we gather.

This was a defense of the Internet, aimed at governments. In this declaration, he fought to protect the Internet against cruel impositions and laws from the government of the United States. If you go through the 16 short paragraphs of his declaration, you will see his efforts and his strong desire to appeal to the government to make a difference. It was not only an appeal and his strife against the government, but also a message to people about their rights and the way they use the Internet. It was his dream that anyone should be able to express his or her beliefs anywhere, without coercion.

In his PyCon 2014 keynote, he talked about how cyberspace needed reform, since the space was held upon a faint idea of culture. After he joined cyberspace, he imagined a “new substrate for the community to form it”. He said that the Internet needed to spread everywhere, that “it was about the connection and not separation”, and that content needed liberation. He aspired to create a system in which people can voice their thoughts freely, even if those thoughts aren't necessarily heard by everyone.

He believed in giving voice to people and explained it with examples: if he had the world's largest diamond in his pocket, whether or not people knew about it, it would still be valuable. Likewise, a song running in his head becomes valuable only after he lets it be known. The concerts performed by his band were taped by many people, and he did not stop them, although initially the band thought people were stealing and should be stopped. He allowed it because he believed it was going to be the “effecting system for spreading word about what they did”. He talked about the cultures existing around different languages: there are the TDX people, the UNIX culture, Python. He talked about how governments can use their power “to surveil people in a way that they have never done before”. He told how he almost always has a conversation with Ed Snowden about how governments can't be stopped even if something is built so that they cannot talk to each other. He dreamt that the “right to know” should be considered a natural human right, applicable to everyone who is being surveilled, covering the know-how of what the government is doing and the reason they are doing it.

Even at that age, he still had enthusiasm and desired to do a lot more. Truly, it is inspiring and at the same time encouraging to know about his avidity and aspiration. He had huge hope in us, because we are the future and it is we who can shape the politics and the technical architecture.

I was enthralled by the keynote.

Barlow lives on in the ideals he set up. It's on us now to continue the legacy, because his efforts are priceless. Watch the John Perry Barlow symposium by the Internet Archive.

by anuGupta at July 30, 2018 09:33 PM

Kumar Vipin Yadav (kvy)

Inbuilt Functions In C

Inbuilt functions for string processing :-

NOTE : All inbuilt string functions need the string.h header.

1. strcpy(Target_String , Source_String ) :-

This function takes 2 strings as arguments, Target_String and Source_String,
and copies Source_String into Target_String.
e.g.

#include<stdio.h>
#include<string.h>
int main()
{
	char A[15] = "Vipin Yadav";
	char B[15] = "- - - - -";

	printf("Value of A and B before calling function. \n\n");
	puts(A);
	puts(B);

	strcpy(B,A);
  // It will copy A in B as you can see in Output.

	printf("\nValue of A and B after calling function. \n\n");
	puts(A);
	puts(B);

  return 0;
}

Output:-

Value of A and B before calling function. 

Vipin Yadav
- - - - -

Value of A and B after calling function. 

Vipin Yadav
Vipin Yadav

2. strncpy(Target_String , Source_String , n ) :-

This function takes 2 strings and a number n as arguments. Just like strcpy(), it copies
from Source_String into Target_String, but only the first n characters, and it does not
affect the remaining part of the target string.
e.g.

#include<stdio.h>
#include<string.h>
int main()
{
	char A[15] = "Vipin Yadav";
	char B[15] = "- - - - -";

	printf("Value of A and B before calling function. \n\n");
	puts(A);
	puts(B);

	strncpy(B,A,5);
  // It will copy A into B, but only up to n characters, without disturbing the other values of B.

	printf("\nValue of A and B after calling function. \n\n");
	puts(A);
	puts(B);

  return 0;
}

Output:-

Value of A and B before calling function. 

Vipin Yadav
- - - - -

Value of A and B after calling function. 

Vipin Yadav
Vipin - -

3. strcat(Target_String , Source_String ) :-

This function is used to concatenate Source_String just after Target_String, or we can say it appends Source_String to Target_String.

e.g.

#include<stdio.h>
#include<string.h>
int main()
{
	char A[20] = " Vipin Yadav";
	char B[20] = "-->";

	printf("Value of A and B before calling function. \n\n");
	puts(A);
	puts(B);

	strcat(B,A);
	// It will append A to the end of B

	printf("\nValue of A and B after calling function. \n\n");
	puts(A);
	puts(B);

  return 0;
}

Output:-

Value of A and B before calling function. 

 Vipin Yadav
-->

Value of A and B after calling function. 

 Vipin Yadav
--> Vipin Yadav

4. strncat(Target_String , Source_String , n ) :-

Like strcat(), this function appends Source_String to Target_String, but here we can limit how many characters are appended.

e.g.

#include<stdio.h>
#include<string.h>
int main()
{
	char A[20] = " Vipin Yadav";
	char B[20] = "-->";

	printf("Value of A and B before calling function. \n\n");
	puts(A);
	puts(B);

	strncat(B,A,5);
	// It will append the first 5 characters of A to B

	printf("\nValue of A and B after calling function. \n\n");
	puts(A);
	puts(B);

  return 0;
}

Output:-

Value of A and B before calling function. 

 Vipin Yadav
-->

Value of A and B after calling function. 

 Vipin Yadav
--> Vipi    ( note the leading ' ' before the V 🙂 )

5. strcmp( First_String , Second_String ) :-

This function takes 2 strings as arguments and returns a value >0 if First_String is
greater (not on the basis of length 🙂), a value <0 if Second_String is greater, and
0 if both are equal.

NOTE : strcmp() IS CASE SENSITIVE
e.g.

#include<stdio.h>
#include<string.h>
int main()
{
	char A[20] = "aaaa";
	char B[20] = "AAAA";
	int x;

	x = strcmp(A,B);

	if ( x == 0 )
		printf("Both strings are equal.");
	else if ( x > 0 ) // strcmp() may return any positive value, not just 1
		printf("First string is greater.");
	else // strcmp() returned a negative value
		printf("Second string is greater.");

	return 0;
}

Output:-

First string is greater.

(In ASCII, lowercase letters have larger codes than uppercase ones, so "aaaa" compares greater than "AAAA".)

5. stricmp( First_String , Second_String ) or strcmpi( First_String , Second_String ) :-

This function takes 2 strings as arguments and returns a value >0 if First_String is
greater (not on the basis of length 🙂), a value <0 if Second_String is greater, and
0 if both are equal.

NOTE : stricmp() or strcmpi() IS NOT CASE SENSITIVE. This function is not from the standard C library, so it will not work with some compilers, e.g. on Linux/UNIX.

e.g.

#include<stdio.h>
#include<string.h>
int main()
{
    char A[20] = "aaaa";
    char B[20] = "AAAA";

    int x;

    x = strcmpi(A,B);

    if ( x == 0 )
        printf("Both strings are equal.");
    else if ( x > 0 )
        printf("First string is greater.");
    else // strcmpi() returned a negative value
        printf("Second string is greater.");

    return 0;
}

Output:-

Both strings are equal.

6. strlen( String ) :-

This function takes a string as argument and returns its length.

e.g.

#include<stdio.h>
#include<string.h>
int main()
{
    char A[20] = "aaaa";

    int x;

    x = strlen(A);
    
    printf("Length of A is %d.",x);

    return 0;
}

Output:-

Length of A is 4.

7. strlwr( String ) :-

This function converts all the letters of String to lowercase.

NOTE: This function is not from the standard C library, so it will not work with some compilers, e.g. on Linux/UNIX.

e.g.

#include<stdio.h>
#include<string.h>
int main()
{
    char A[20] = "AAAA";

    strlwr(A);
    
    puts(A);

    return 0;
}

Output:-

aaaa

8. strupr( String ) :-

This function converts all the letters of String to uppercase.

NOTE: This function is not from the standard C library, so it will not work with some compilers, e.g. on Linux/UNIX.

e.g.

#include<stdio.h>
#include<string.h>
int main()
{
    char A[20] = "aaaa";

    strupr(A);
    
    puts(A);

    return 0;
}

Output:-

AAAA

9. strset( String , character ) :-

This function replaces the whole string with a character you give:
it takes 2 arguments, a string and a character, and overwrites every character of the string with that character.

e.g.

#include<stdio.h>
#include<string.h>
int main()
{
    char A[20] = "Vipin";
    char C = 'V';

    strset(A,C);
    
    puts(A);

    return 0;
}

Output:-

VVVVV

10. strnset( String , character , n ) :-

This function replaces characters of the string with a character you give, up to a limit:
it takes 3 arguments, a string, a character and an integer n, and replaces the first n
characters of the string with that character.

e.g.

#include<stdio.h>
#include<string.h>
int main()
{
    char A[20] = "Vipin";
    char C = 'V';

    strnset(A,C,3);
    
    puts(A);

    return 0;
}

Output:-

VVVin

11. strspn( String1 , String2 ) :-

This function takes 2 strings as arguments and returns the number of characters
in the initial segment of String1 which consist only of characters from String2.

e.g.

#include<stdio.h>
#include<string.h>
int main()
{
    char A[] = "Vipin is my name";
    char C[] = "Vipin";
    int x;

    x = strspn(A,C);
    
    printf("String C matches In String A till %d.",x);

    return 0;
}

Output:-

String C matches In String A till 5.

There is one more function, strstr();
we will learn about it when we learn pointers.

by kumar vipin yadav at July 30, 2018 06:29 PM

Pradhvan Bisht (pradhvan)

How to setup wifi drivers for RTL8723be network adapter for Manjaro/Arch linux

For a long time I had been facing wifi connectivity issues with my system: weak and unstable wifi signal even while sitting literally next to my router. When I looked into the issue online, I realized that every laptop with the RTL8723BE network adapter was facing this issue, or even worse, had no wifi signal at all.

Things I tried to solve this problem:

1) Hopped Linux distros

2) Downgraded my kernel version

I quickly learned that it was a driver issue, not something related to a particular distribution, so trying out different Linux distributions would not solve the problem. The hit-and-trial method was to manually downgrade the Linux kernel, check which version had a working wifi connection, and stick to it. This approach was a bodge and came with certain consequences.

The best approach is to manually download the drivers, locally build them in your machine and install them. You can get the latest drivers from this link.

$git clone https://github.com/lwfinger/rtlwifi_new

$cd rtlwifi_new

TIP: Before going forward make sure you have the Linux headers, I am using kernel version 4.14 which is the default for Manjaro 17. You can easily get them by

 $sudo pacman -S linux414-headers

Now you have the all the requirements, let’s build the driver 😛

$make clean && make

$sudo make install

$sudo mkinitcpio -P

This might take some time so be patient and if done successfully, reboot your system. After rebooting you need to edit the configuration file.

$vim /etc/modprobe.d/rtl8723be.conf

If this file is present, good; else create a file at /etc/modprobe.d with the file name rtl8723be.conf and add the following to it:

options rtl8723be fwlps=0 ant_sel=x

x is the antenna number, either 1 or 2. For now let's keep it 2; it worked best for me. You can check both by:

$sudo rmmod rtl8723be

$sudo modprobe rtl8723be ant_sel=1

$sudo rmmod rtl8723be

$sudo modprobe rtl8723be ant_sel=2

Whichever works and gives you the best result, use that value for x. Hope this helps 🙂

 

by Pradhvan Bisht at July 30, 2018 03:57 PM

July 29, 2018

Manank Patni

Day 25

Read some chapters from the PYM Book. The book is great and has some really good Python tricks that aren't easy to find except in big books. I feel very lucky to have found this book and the dgplug training.

by manankpatni at July 29, 2018 06:14 PM

Abdul Raheem (ABD)

Guest session by James Lopeman(meflin IRC nick)

We had a guest session on 25 July 2018 by James Lopeman (meflin is his IRC nick). So who is he? He is the CTO of a wireless ISP in Denver, Colorado, USA; he has built ISPs on 3 continents; he is a GSoC (Google Summer of Code) admin for the Python foundation; and he has worked with Systers, Minnowboard, SyncDifferent, and The Linux Kernel Organization.

Some good suggestions that were given by him were

  • Always find something you love to learn and do.
  • Find good people to work with.
  • You should always be OK with failing. There were many more; you can go through the logs here.

If you are a beginner, these were a few steps suggested by him that almost every beginner should follow to become a good system administrator:-

  • Always try something new.
  • Don't hesitate to experiment with the things you like most or want to know more about 🙂
  • Break your own computer and fix it (I guess he was talking about dismantling and fixing it 🙂)
  • Try to install things on your own; don't take the help of an agent.

Kushal requested James Lopeman to give examples of things/services people can do or set up to learn more about systems, as most of the participants here are newbies.

  • One should set up networking not with the help of an agent but by his/her own hands.
  • Web servers are easy fun you can have on your machine; also learn to build a package for the distro you use (which is very useful).
  • Learn to write shell scripts; they are used all over.

Happy learning :).

by abdulraheemme at July 29, 2018 03:31 PM

Sehenaz Parvin

New milestone in humanity.

What is humanity according to India?
The answer is just one word: “hatred”. They just know how to hate people, and kill people for no reason.

What Indian people can do for their country?

  1. Molesting people
  2. Protesting against film industry
  3. Women destruction in every way possible
  4. Killing people for silly reason like cows, difference in religion,etc.
  5. And at last , a new thing that is added in the list recently – rape of animals irrespective of what they are.

Yes, now rape of animals. Sounds impossible, na? But no, our Indian men can do anything. They have the ability to do everything. Remember the quote “Saare jahan se accha, Hindustan hamara”? India is working full-fledged on that, making our country “developed” in both internal and external networks. We are on our way to an Oscar for “Best Country of the Year”.

Today morning I got a news from Twitter and the heading was just unbelievable:

A pregnant goat dies after being gang raped by 8 men in Haryana.

I mean how? Why? Till now we were frustrated with the rapes of women across the whole country, and now this! What is left to be seen? Who's next? A hen? A pig? A snake? Till now I thought humans were interested in molesting humans only, but no! Now along with the females of the country, we have to seek justice for our female animals too! Read this:

India, just wow & how? A goat! Who's next? A hen, a pig? India, an Oscar is eagerly waiting for you. @narendramodi do you have anything to say here?
#humanityisdying
@RealtyMyths

http://www.dnaindia.com/india/report-haryana-pregnant-goat-dies-after-being-gang-raped-by-8-men-2642880

And this was not enough. Read one more here. Today a dog which was only 7 months old was also raped by a man!

https://www.indiatoday.in/amp/mail-today/story/man-rapes-dog-to-death-section-377-unnatural-offences-ipc-1035577-2017-09-01?__twitter_impression=true

Now I have a question. Please tell me, what feeling arises on seeing a goat or a dog? No, I really want to know now: what is it, that even biology fails to explain, in such men? Just because she belongs to the feminine group and has a vagina, we would rape her? Is this humanity?

People of India who are giving great lectures on developing India must pay heed to these cases first. Something must be done to these sick and frustrated people. A country will be developed only after the people are developed.

Recently, India was declared the “most dangerous country for women”. Soon it's going to be declared the “most dangerous country for all feminine genders”.

A new milestone is being set by Indian men every day. Humanity is in grave danger. If this is not controlled, we cannot imagine what the next step will be.

Women are learning different sorts of self-defence activities. What are the animals going to learn? They cannot even speak! Are these men taking advantage of that? Or do we want to say that the animals invited them to do so?

“Man, when are we going to stop this?” It's a grave question with no answer. Save humanity. Think now. It's high time.

I hope you will also have the same feeling after reading this. Tell me what you think about these incidents.

P.S. I have dedicated this to those few men who are after the destruction of humanity. Not all men are the same. I want the other men to speak up against these shameful activities too, to restore humanity and peace in India.

by potters6 at July 29, 2018 12:48 PM

July 28, 2018

Prajit Mukherjee(thegeekbong)

Python – The programming language of Coders, Not the Snake

Everyone has heard about Python, be it the snake or the programming language. Here I will dedicate this post only to the programming language. 😉 Python today is one of the best programming languages because of its readability, its ease of writing, and notably its use of significant whitespace, which is part of the design philosophy of the language.

Wikipedia says:

Python is an interpreted high-level programming language for general-purpose programming.

Easy peasy, right? Thought so.
The basic idea behind Python is that it is a scripting language: we write scripts in a certain manner and then tell the Python interpreter to interpret them and give us the output. That's all.

Let’s dig into some history of this language, shall we?

History

Python was developed by Guido van Rossum

Guido van Rossum. Credits: https://en.wikipedia.org/wiki/

Python was conceived in the late 1980s, and its early versions were released in 1991, before the popular language Java. With its first release, Python was not able to capture the market the way Java did after its release in 1996. This must have been for many reasons.

Soon enough came a revamped Python 2.0 on October 16, 2000, with features like a cycle-detecting garbage collector, Unicode support, and others like list comprehensions borrowed from Haskell. Python 1 went up to version 1.6 before 2.0 was released.

 

Python logo in the 1990s – 2005. Credits: https://en.wikipedia.org/wiki/

Python 2.0 was pretty good and extended to version 2.7, with backward compatibility, meaning scripts written for Python 1.x and other earlier versions could still be interpreted.

After Python 2, Python 3 was released on December 3, 2008. Python 3 is the Python as we know it today. With Python 3, the Python Software Foundation dropped backward compatibility, started fresh with new features, and rectified several fundamental design flaws of the language. The main principle of Python 3 was to reduce feature duplication by removing older ways of doing things. Python 3 nonetheless remained a multi-paradigm language in which the coder can choose among OOP, structured programming, functional programming, and others.

Some major changes in Python were:

  • Changing print so that it is a built-in function, not a statement.
  • Renaming the raw_input function (Python 2) to just input, which takes every input as a string datatype.
  • Changing the integer division functionality. In Python 2, 3 / 2 gave 1. From Python 3 onwards, 3 / 2 gives 1.5; to get only the integer part you type 3 // 2, which gives 1.
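The changes listed above can be seen in a few lines of Python 3 (a small sketch; the literal "12" stands in for what input() would return):

```python
# print is a built-in function in Python 3
print("hello")

# true division vs floor division
print(3 / 2)     # 1.5
print(3 // 2)    # 1

# input() returns a string, so convert before doing arithmetic
value = int("12")    # "12" stands in for input('Enter a number: ')
print(value ** 2)    # 144
```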

These were some great features which were incorporated in the new Python 3.
Enough with the history lesson. Let’s jump to using python.

Using Python

Python is an easy to learn language. If you are a beginner and want hands-on experience with any programming language, then I highly recommend you begin with python.

For Windows users, you have to download the installer, depending on whether your PC is 64-bit or 32-bit. Run the executable file and don't forget to select the option on the first screen to add the Python interpreter to your PATH. This is a very important step, or you won't be able to use Python from the command line. Give it a while, and after the installation is complete, open Command Prompt (Win+R -> type cmd -> Enter). Type this command and hit Enter.

python --version

This will show you the version of Python you have installed. If you see anything other than the version, like “Python not found”, then something has gone wrong: either the install did not quite work, or you forgot to add Python to your PATH. In that case you have to explicitly add the directory to your PATH variables.

For Linux and macOS users, two different versions of Python come preinstalled. For many users,

python --version

Gives the output as 2.7.x, as python invokes Python 2.7.x in your system, and

python3 --version

Gives the output as 3.6.x as python3 invokes 3.x.x.

You might want to change this, but I warn you not to do this since it can break your system permanently and you may have to install the OS again.

I work with Linux since it's powerful, really handy, and in recent years, really easy to use.

Let’s begin.

We will be working with Python 3 and at the end of the blog, I’ll also give you links for those who want to study deeper about Python.

First, let’s start python:

python3

This goes in your terminal for Linux and macOS users; Windows users can do the same in Command Prompt, but should type just python.

The above command will show you something like this:
python3-command

This tells us that the python command line interpreter has started and you can start writing code snippets.

Next, type in the commands as I have done below, and you'll see the output after you hit return after every command:

 >>> 12+56
68
>>> 26-20
6
>>> -56+45
-11
>>> 25-50
-25
>>> 25/2
12.5
>>> #25//2 
... 
>>> 25//2 #Gives only integer part
12
>>> 5*2
10
>>> 5**2 #5 raise to the power 2
25
>>> var_1 = 100
>>> var_2 = 200
>>> var_1 + var_2
300
>>> print(var_1)
100
>>> print(var_1 + var_2)
300
>>> value = input('Enter the number to be squared: ')
Enter the number to be squared: 12
>>> value = value**2
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for ** or pow(): 'str' and 'int'
>>> #Right way to input an integer
>>> value = int(input('Enter the number to be squared: ')) 
Enter the number to be squared: 12
>>> value = value**2
>>> print(value)
144

The error above (in bold) occurs because in Python 3, input() always returns its value as a string, no matter what you type, just like raw_input() in Python 2. Hence, I have to tell Python explicitly that I want an integer, by converting the value with the int() function.
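Note that int() itself raises a ValueError if the text is not a valid number. A more defensive version of the snippet above, sketched here with a hypothetical square_of helper of my own, wraps the conversion in try/except:

```python
# int() converts a string to an integer, but raises ValueError
# when the text is not a valid number -- so we handle that case.
def square_of(text):
    try:
        number = int(text)
    except ValueError:
        return None  # not a valid integer
    return number ** 2

print(square_of("12"))     # 144
print(square_of("-3"))     # 9
print(square_of("hello"))  # None -- no crash this time
```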


So that was the command line. You type a command. You hit return. You see results.

But, how do we execute multiple lines of code in one go?
There is an application called the Mu editor in which you can both write and run Python scripts. Go to its downloads section and download the installer; it is available for both Windows and macOS. Do refer to the instructions beside the download links if you need to.

For Linux users, you can install the Mu editor with the command below, as explained by Kushal Das in his book Python for You and Me.

python3 -m pip install -U mu-editor --user

It is around 150 MB and may take some time to download and install. Once it is installed, you can run it with the command

python3 -m mu

When you want to execute your code, simply click RUN in the editor.

Saving scripts is also simple in its graphical user interface (GUI): just click SAVE and give the script a name without any spaces, e.g. my_script.py. The .py extension lets the interpreter know it is a Python script; every Python script has the .py extension. So the next time you see a .py file, whoop de doo, you know it's a Python script.
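As a concrete example, here is a tiny script (a hypothetical my_script.py of my own) that you could save in the Mu editor and run. Unlike the interactive prompt, a script runs all its lines in one go:

```python
# my_script.py -- a minimal example script.

def greet(name):
    """Build a greeting for the given name."""
    return "Hello, " + name + "!"

# A script can loop, call functions, and print -- all in one run.
for person in ["Alice", "Bob"]:
    print(greet(person))
```

From a terminal you could also run it with python3 my_script.py.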

So now you know how to write basic Python and where to write your scripts. You can go on and read more about Python.

Where to use Python?


Python is one of the most versatile languages around. Using Python (and its broad ecosystem) alone, we can work on web development, data science, new programs and software, games, search engines, etc. Some very famous Python libraries and frameworks you can use as a developer are listed below:

Flask:


Flask is a small, easy-to-use framework written in Python and used for web development. It was released on April 1, 2010, and is developed by Armin Ronacher.
It is regarded as a microframework because it does not try to do much itself, leaving you in control. Unlike Django, Flask does not ship with a database abstraction layer or form validation out of the box; you add those capabilities through extensions.
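As a minimal sketch of what a Flask app looks like (assuming Flask is installed via python3 -m pip install flask; the route and greeting text are my own illustration):

```python
# A whole Flask web app fits in a few lines.
# Assumes Flask is installed: python3 -m pip install flask
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from Flask!"

# During development you would start a server with app.run();
# here we exercise the route with Flask's built-in test client instead.
client = app.test_client()
response = client.get("/")
print(response.status_code)
print(response.get_data(as_text=True))
```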

Django:


Django is a free and open-source framework which follows the model-view-template (MVT) architectural pattern. Django provides easy and fast ways to interact with a database, and Python is used throughout the application, even for simple forms and data models. Its own website states: "The web framework for perfectionists with deadlines."
You can learn about the latest stable version, Django 2.0, in its official documentation, which is the best resource on the internet.

NumPy, SciPy, matplotlib:


These very famous libraries are used for data analytics and in the field of data science. If you want to be a data scientist or data analyst, these three libraries are essential to your work.

NumPy adds support for multi-dimensional arrays and matrices, along with a huge number of operations to work on these arrays. Its predecessor, Numeric, was first released in 1995, and NumPy itself appeared in 2006. Please refer to the documentation for further information on how to use it.
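A small sketch of those array operations (assumes NumPy is installed via python3 -m pip install numpy; the numbers are my own illustration):

```python
# NumPy arrays support whole-array operations without explicit loops.
# Assumes NumPy is installed: python3 -m pip install numpy
import numpy as np

a = np.array([[1, 2],
              [3, 4]])

print(a * 2)      # element-wise multiplication of every entry
print(a.T)        # transpose of the matrix
print(a.sum())    # 10 -- reduce over the whole array
print(a @ a)      # matrix multiplication (the @ operator, Python 3.5+)
```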

SciPy is a free and open-source Python library used for scientific and technical computing. To learn more about SciPy, read its documentation.

Matplotlib is a Python 2D plotting library which produces quality graphs and figures in a variety of formats. It can be used in Python scripts, the Python and IPython shells, the Jupyter notebook, web application servers, and four graphical user interface toolkits. You can read about the latest stable release, matplotlib 2.2.2, in its dedicated documentation.

Source of the above info: https://www.scipy.org/index.html

Pandas:

Picture courtesy: https://www.analyticsvidhya.com/blog/2018/03/pandas-on-ray-python-library-make-processing-faster/

Pandas is a software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and data analysis tools for Python. It is open source and anybody can contribute to it.

It has the salient features below (as compiled by Wikipedia):

  • DataFrame object for data manipulation with integrated indexing.
  • Tools for reading and writing data between in-memory data structures and different file formats.
  • Data alignment and integrated handling of missing data.
  • Reshaping and pivoting of data sets.
  • Label-based slicing, fancy indexing, and subsetting of large data sets.
  • Data structure column insertion and deletion.

The library is highly optimized for performance.
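A small sketch of a few of these features in action (the column names and values are made up for illustration; assumes pandas is installed via python3 -m pip install pandas):

```python
# A DataFrame is a labelled, tabular data structure.
# Assumes pandas is installed: python3 -m pip install pandas
import pandas as pd

df = pd.DataFrame({
    "name":  ["Asha", "Ravi", "Meena"],
    "score": [91, 78, 85],
})

print(df[df["score"] > 80])       # boolean filtering of rows
print(df["score"].mean())         # aggregate over a column

df["passed"] = df["score"] >= 80  # column insertion in one line
print(df)
```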

This is just a teaser; there are tons of libraries and frameworks available for Python. If you research the language and dig deeper, you'll get to know its true abilities, and in the end you'll fall in love with this programming language. Python scales with you as your programming skills grow.

Some examples of Python in daily life: Google famously uses Python throughout its infrastructure, including parts of its search systems; political parties gathering votes or analysing their own supporters rely on data analytics and machine learning, where Python is again used, and the list goes on. Hence, it's crucial that as coders we learn and study more about Python.

Here are some resources to learn about Python:

There are many resources to learn Python, such as edX, Udemy and many more. Choose what you like, and do tell me in the comments section what you chose.

Conclusion:

For me, Python is one of the best programming languages I have worked with. It has a huge number of uses, and at present it ranks among the most popular languages around. Today, for a programmer, knowing Python is close to a necessity. Hence, I strongly urge you to research and get familiar with Python.

Do share and follow for more, and please leave your feedback in the comments; it really means a lot to me.

by thegeekbong at July 28, 2018 09:56 PM

Bhavesh Gupta (BhaveshSGupta)

The Man Who Stole My Laptop's Ferrari

So, the story goes back to 2014 or 2015, I don't remember exactly when. My laptop stopped working, and on later inspection it was discovered that the issue was with my onboard graphics card. A brief history: I had a Sony Vaio laptop with an AMD Radeon graphics card on it. So it was found that the graphics card was creating the issue, and I took it to many people to check what could be done about it.

July 28, 2018 07:54 PM

Abdul Raheem (ABD)

Synopsis for the previous classes

As always, I got busy with my college work, and I have my exams in the very first week of August. Anyway, here is my blog, enjoy :).

So we had a guest session on 20 July 2018, taken by manishearth (IRC nick), on Rust. He works on Servo and Rust at Mozilla and has been involved with open source for a while now; Wikipedia was the thing which got him involved with open source. I don't know much about Rust, so I am not going deep into it; you can have a look at that session here, and you can find more information about Rust here.


Session by Kushal on PYM book:

So after about three days, we had a session by Kushal da on the PYM book; he had given us some homework from it before the guest session by manishearth. The session started, everyone greeted each other, and Kushal asked us whether we had completed our homework, then took some questions/doubts about it.

f"{d} was a {d:%A}, we started the mailing list back then."

So, as I said, Kushal took some questions, and I had a doubt about the above line: what does the %A do? The answer to my question, provided by Kushal da, was here.
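For context, %A is the strftime directive for the full weekday name. Inside an f-string, everything after the colon is passed to the object's formatting logic, which for dates is strftime syntax. A small sketch with an arbitrary date of my own choosing:

```python
# f-string format specs on dates use strftime directives:
# %A = full weekday name, %B = full month name, %d/%Y = day/year.
from datetime import date

d = date(2018, 7, 28)
print(f"{d} was a {d:%A}")   # 2018-07-28 was a Saturday
print(f"{d:%A, %d %B %Y}")   # Saturday, 28 July 2018
```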

And everybody else started asking the doubts/questions they had about the given homework. The thing which I too got to know from another question was about boolean values (True/False). In Python, every number other than zero, including negative values, is truthy. One more thing to remember: always write True and False with a capital first letter (for the boolean values only).

The values which are falsy in Python include False, None, 0, and empty containers such as {}, [] and "". Then we had some discussion on operators, basically the common mathematical operators like +, -, *, /, and then we headed over to the logical operators 'and' and 'or'.

So first we get started with the operator 'and'. If both a and b are truthy, then the result is truthy, and the expression evaluates to the value of b. If a is falsy, the expression evaluates to a, and b is not evaluated at all.

Then coming to the operator 'or': if either side is truthy, the result is truthy. If the left-hand side value is truthy, it is returned and the right-hand side is not even checked; this is called short-circuiting. And as always, we got some homework.
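The truthiness rules and the short-circuit behaviour described above can be sketched in a few lines:

```python
# Truthiness: every non-zero number is truthy; empty containers are falsy.
print(bool(-56), bool(0), bool(0.0))   # True False False
print(bool([]), bool({}), bool(""))    # False False False

# 'and' returns the first falsy operand, or the last one if all are truthy.
print(3 and 5)   # 5 -- both truthy, so the value of b
print(0 and 5)   # 0 -- left side falsy, right side never evaluated

# 'or' returns the first truthy operand, or the last one if all are falsy.
print(0 or 7)    # 7
print(3 or 5)    # 3 -- left side truthy, right side never evaluated
```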


by abdulraheemme at July 28, 2018 03:21 PM