Summer training students' planet

June 14, 2019

Kuntal Majumder (hellozee)

Done with boost

This would start with a quote: Documentation is like sex: when it is good, it is very, very good; and when it is bad, it is better than nothing.

by hellozee at disroot.org (hellozee) at June 14, 2019 06:46 PM

Bhavin Gandhi

Triaging bugs of GNU Emacs

I have been using Emacs for more than 6 months. mbuf and I were discussing some way to package Emacs as a container image, so that we can run that image to test things, triage bugs for a particular version, etc. Before getting started with that, he wanted me to have an idea about the existing bugs of Emacs and how to triage them, so that I get a better idea of the whole workflow.

by @_bhavin192 (Bhavin Gandhi) at June 14, 2019 12:23 PM

Jason Braganza

French, Week 5

Similar-sounding words are absolute murder on the ears.
I can’t figure out moo and moue, kip and keep, and lots of bonne and bun.
Keeping at it though. Keeping at it. :)

by Mario Jason Braganza at June 14, 2019 04:21 AM

June 10, 2019

Jason Braganza

French, Week 4

Coasting along this week, not stretching too much.
Just doing enough, so I do not fall off the wagon.
Having fun reading every word containing an r with the French ʁ sound :)


by Mario Jason Braganza at June 10, 2019 06:27 AM

June 09, 2019

Kuntal Majumder (hellozee)

Another Week with boost::graph

In the previous post, I discussed boost::astar_search and ignored the most important part of the whole setup, the graph itself. This, I would say, is a little harder to comprehend than boost::astar_search.


by hellozee at disroot.org (hellozee) at June 09, 2019 07:59 AM

May 31, 2019

Jason Braganza

French, Week 3

It’s going swimmingly well so far.
Still in the phase where there is an avalanche of stuff coming at me.
But a few tiny things, I know now.
Which is how I know I am making progress.

P.S. Crazy idea that just struck me, would to start writing these updates in french as soon as I am able!

by Mario Jason Braganza at May 31, 2019 06:32 AM

May 30, 2019

Jaydeep Borkar(jaydeep)

Building Chatbots using Dialogflow on Google Assistant for Beginners

In this tutorial, we will learn to build a chatbot (virtual assistant) using Dialogflow which will work on Google Assistant.

But what is Dialogflow in the first place?

It’s a tool by Google to build conversational chatbots.

STEPS

  • Go to the https://dialogflow.com/ — you will see a Dialogflow home page.
  • Sign in using your Google account.
  • After you sign in, click on ‘Go to console’.
  • You will be directed to the main console of Dialogflow. Now click on the drop-down icon next to the settings icon in the left-most column, and then click on create new agent.

Now, what is an agent over here?

Agent is an interface in Dialogflow which contains different sections called intents which further contain all the responses to the user’s queries. We will learn more about intents as we proceed ahead.

  • After you click on create new agent, give an agent name, default language, time zone,  and click on create. Keep ‘create a new Google project’ as it is in the Google project section. It will create a new Google project for Actions on Google and Google Cloud. We will be integrating Dialogflow with Actions on Google console which would help us to deploy it on Google Assistant.
  • Now, you can see your created agent in the console. We will be creating different intents now for our agent. You can create as many intents as you want. Two default intents — Fallback and Welcome are already created.

 

Let’s understand the Default Fallback and Welcome intent

The Fallback intent comes into the picture when the bot doesn't understand the query asked by the user. It means there is no intent that matches the user's query, so it will give a response like "sorry, I didn't get it". Talking practically — whenever you say something very unusual to Google Assistant, it will say "sorry, didn't get you". Here, after you said something strange that isn't natural language, the Fallback intent got triggered and gave this response.

Default Welcome intent will greet the user once your bot is called. You can customize both the default intents with the responses that you want.

  • Now click on the + sign to create more intents. You can make as many intents as you want depending on your use case. Let's say you are building a chatbot for an ice cream parlor, and the user wants to know all the flavours that you serve. Here, you will create an intent, let's say 'flavors'. Then you will add the training phrases.

 

Training Phrases

  • Training phrases are the queries that the user will ask your chatbot. You can add as many similar training phrases as you want. These are the queries that you think the user would most likely ask. For the flavour intent, you can add training phrases like — 'What flavors do you have?', 'Show me the flavors', 'Flavors', 'I want to have a look at the flavors', and the list is endless. These training phrases are used to train a natural language understanding model internally. This is the best part about Dialogflow — you don't have to write any code to train the model. In circumstances where you want real-time results, you can write some Node.js code, which we will talk about later!

 

Response section in the Intent

  • Now, in the Responses section, you can add the responses that you would like to show to the users. For instance, a simple response to the training phrase 'Show me the flavors' could be 'Mango, Vanilla, Chocolate'. Make sure you append every response for every intent that you create with the sentence 'would you like to know anything else?', because for every intent except the exit intent (the intent you would use when the user exits) it's necessary that your response has a user prompt, else it would be against the guidelines published by Google and your app might not get published.

 

In a similar fashion, you can have as many intents as you like, as per your need.

Don’t forget to save every time you make an intent.

 

Making an exit intent

You also need to make an intent that gets triggered when the user leaves your app, which we will call the exit intent. For instance, if the user says 'bye', you can have a separate intent for this with training phrases related to 'bye', and in the response section you can add anything that you like, something like 'feel free to visit again'. Don't forget to toggle on 'set this intent as the end of the conversation' at the bottom of this intent, as this intent will mark the end of the conversation with the user.

 

Using Node.js code

Now, if you want to give some real-time output to your users, you can write some Node.js code for it. Go to the Fulfillment section on the left of the console; there you will see an option for the Inline Editor — toggle it on to enable it. You can write your code in the index.js part. Let's say you're making an app for a restaurant whose working hours are from 10:00 am – 10:00 pm. If someone wants to check at 11:00 pm whether the restaurant is open, then instead of giving a static response like "Our timings are from 10:00 am – 10:00 pm", the code will take the real-time parameters and tell whether the restaurant is open at that specific time. You can use Node.js code for different use cases; this is just an example. After you're done, click on deploy.
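The inline editor itself expects Node.js and the Dialogflow fulfillment library, so take the snippet below purely as an illustration of the logic described above — a minimal sketch in Python of the opening-hours check, with the hours and messages borrowed from the example (the function name is made up):

from datetime import datetime, time

OPENING, CLOSING = time(10, 0), time(22, 0)  # 10:00 am - 10:00 pm, the example working hours

def opening_hours_response(now=None):
    # Return a real-time answer instead of the static timings string.
    if now is None:
        now = datetime.now().time()
    if OPENING <= now <= CLOSING:
        return "Yes, we are open right now. Would you like to know anything else?"
    return ("Sorry, we are closed at the moment. Our timings are from "
            "10:00 am - 10:00 pm. Would you like to know anything else?")

# Someone asking at 11:00 pm gets the "closed" answer:
print(opening_hours_response(time(23, 0)))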

 

Integrations

You can have your chatbot integrated with different platforms like — Slack, Facebook Messenger, Alexa, Cortana, Twitter, Viber, Skype, Telegram, and many others. Make sure to toggle on the platforms that you like. We will be sticking to the Google Assistant, which is the default one.

History

In the History section on the console, you can check how the users are interacting with your app and what they are saying. You can use this as a tool to check which specific questions your app is unable to answer, so that you can work on them.

Analytics

In the Analytics section, you would see the analysis of how your app is performing.

Now it’s time to test the app. Let’s see how it works.  

On the Dialogflow console, at the right side, you would see “See how it works on Google Assistant”, click on that. Once you click it, you will see the test simulator. Let’s say that our app name is ‘test app’, then you will see the invocation phrase something along the line “Talk to my test app” on the simulator. You can test your app with all the training phrases/questions that you have used to train the model. You will get a better intuition about how your app will actually perform once it gets deployed on the assistant.  
Now, it’s time to deploy it.  

On the simulator page itself, you will see an Overview section on the left. Click on that. In the Quick setup part, you can choose how your action should be invoked. In the 'Get ready for deployment' part, you can choose in which countries you would like to offer your app over the Assistant. By default, it's all 215 countries. You can also choose which surfaces you would like to run your app on; it's both phones and speakers by default.

Now, go to the ‘Deploy’ section, go to ‘Directory information’ — here you can add a short and long description for your app, sample invocations, background and profile image for your app, contact information, and privacy and consent. These all will be visible to your users. Feel free to click on ‘Need help creating a Privacy Policy?’ to know how to make a privacy policy, please follow all the steps over there. You can use free tools like this to make a privacy policy for your app. Then you can choose what category your app belongs to and some other related information about your app below it. After you’re done with all these, click SAVE at the top.

Afterward, you can add Surface Capabilities and Company details in the Deploy section itself. After that, go to the ‘release’ section within the deploy section itself — there you would see an option to submit for production in the production part. Click on that and your app will be submitted for the review by Google. If it meets all the guidelines, it will be soon deployed on Google Assistant within 24-48 hours (you will receive an email from Google once it gets deployed successfully; and if it doesn’t, you will get an email in this case as well with all the errors so that you can fix them and submit for the production again). You can also opt for the alpha and beta versions for your app.

Once it’s deployed, your app will be available to over 500+ million devices without any installation, isn’t that cool?

Now, you will get some perks as well. If your app gets a good user base and is in good standing, you would get Google Cloud credits worth $200 every month for 1 year and also an exclusive Google Assistant t-shirt from Google.

 

Feel free to post any doubts or suggestions!

 

by Jaydeep Borkar at May 30, 2019 09:48 PM

May 29, 2019

Mayank Singhal (storymode7)

Sending a mail using telnet?

Howdy! Ever sent an email? It’s super easy right?
Recently, I was setting up Discourse on an EC2 instance, and it required filling in SMTP details. I wasn't able to get it working the first few times, even though setting up Discourse itself is pretty amazingly easy. Eventually, everything was working except the sending of emails.

And this is what I’ve learned while trying to make a mail reach the other side.

What is Discourse?

It is a discussion platform. A forum. With great UI and ease of set up.
Want to see a sample? Here’s Discourse’s own discussion site: Discourse. It is completely open source (here’s the source code). All you need is a domain and a server. Domain to give discourse a separate subdomain and server to run discourse.
They also offer free hosting to growing open source projects.

Well, so what’s in it for us?

Aha! While I was trying to find the source of the timeouts that occurred when I was trying to send an email, I went down the road of checking whether the email hosting I was using (zoho.com) could send a mail with the credentials I had.
Mail sending uses SMTP, which requires 4 main things:

  • SMTP server address
  • A port for SMTP transmission (can be 25, 465, 587)
  • SMTP username
  • SMTP password

For Zoho, it’s pretty simple.
Port: It offers one port for TLS (587) connections and another port for SSL (465) connections.
Username: zoho-username
Password: zoho-password

While on the server you can use discourse-doctor to send a mail to your own email address or to an email address provided by mail-tester.
It will send a test email to the said email ID. But one thing that I couldn't find a way to do was to change the config quickly and retest.
If I changed the config, I had to do launcher rebuild app, which would take 5-10 minutes. When I searched for SMTP-related errors, the first thing that popped up everywhere was to test whether your mail server was even up. In my case it was smtp.zoho.com, so the server being down was probably not the cause of the error, but I checked it anyway.

Checking if your SMTP server is up

A single line command.
telnet smtp.server.address smtp-port

telnet smtp.zoho.com 587

If the smtp server’s up, you’ll get connected and see something like this:

Trying 8.40.223.201...
Connected to smtp.zoho.com.
Escape character is '^]'.
220 mx.zohomail.com SMTP Server ready December 29, 2018 5:45:16 AM PST

Press Ctrl-] then Ctrl-d to exit.
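If you'd rather script this check than sit in an interactive telnet session, the same test is a couple of lines of Python — a small sketch using the host and port from above:

import socket

# Same host and port as the telnet example; a successful connection plus a
# 220 greeting means the SMTP server is up and reachable.
with socket.create_connection(("smtp.zoho.com", 587), timeout=10) as conn:
    print(conn.recv(1024).decode())  # e.g. "220 mx.zohomail.com SMTP Server ready ..."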

So, the server’s up. Now, what next?

Let’s send an email using telnet

Authenticating yourself (Using telnet + mailjet)

In this case, I'm using an SMTP server by Mailjet; we'll see Zoho in the next section.
(Don't worry, Mailjet's really simple to use. It sends an email using another, verified email address.
For example, I was able to use SMTP to send mail using a Gmail email address.)

NOTE: Telnet is NOT SAFE. Your credentials are passed in plain text. So make sure
you do this little experiment on a spare email and change password after you’re done testing.

If you’re also using mailjet, go to this URL to get your credentials and other details like server address(generally in-v3.mailjet.com) and port(587) etc.

telnet in-v3.mailjet.com 587

Output:

Trying 104.199.96.85...
Connected to in-v3.mailjet.com.
Escape character is '^]'.
220 in.mailjet.com ESMTP Mailjet

Now go ahead and say helo to the server like this:
ehlo your-username
You can also use helo in place of ehlo.
The difference is that ehlo provides information about the server's capabilities,
including the authentication mechanisms it allows.

helo a78syfhx087hs78hg0hx07y87wwkjew3

Here “a78syfhx087hs78hg0hx07y87wwkjew3” is the username provided by Mailjet.

Now go ahead and specify an authentication mechanism. Let’s go for AUTH LOGIN.
For this you’ve to write:

AUTH LOGIN

Now you’ll get a weird looking response.

334 VXNlcm5hbWU6

This method uses base64. So can you guess what the gibberish after 334 means?
Then decode this! Use the base64 command or any website to decode base64 data.

base64 -d
VXNlcm5hbWU6

Press Ctrl-d to exit after entering the base64 data and see the result.

And voila! It’s “Username:”
So, you can now proceed and enter your username, no?

Nopes. First, you need to encode it to base64 too.

If you’re going the CLI way, go for:

echo -n your-username | base64

“-n” ignores the trailing newline.
You can also use base64 directly, but to stay away from new line, you need to
press Ctrl-d just after the username is entered without pressing enter.

So for my random username here I’d next type the following in the telnet window.

YTc4c3lmaHgwODdoczc4aGcwaHgwN3k4N3d3a2pldw==

Now you’ll be asked for password. Again in base64.

334 UGFzc3dvcmQ6

That 334 line is the base64-encoded "Password:" prompt; now send the password the same way, encoded in base64.
So for the password "strong-gibberish" I'll enter

c3Ryb25nLWdpYmJlcmlzaA==

Now you should get something like this:

235 2.7.0 Authentication successful
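If you prefer Python to the echo/base64 commands for preparing and checking these strings, here is a tiny sketch using the made-up credentials from above:

import base64

# The prompts the server sent, decoded:
print(base64.b64decode("VXNlcm5hbWU6").decode())   # Username:
print(base64.b64decode("UGFzc3dvcmQ6").decode())   # Password:

# And the strings to paste back, encoded (made-up credentials from above):
username = "a78syfhx087hs78hg0hx07y87wwkjew3"
password = "strong-gibberish"
print(base64.b64encode(username.encode()).decode())
print(base64.b64encode(password.encode()).decode())  # c3Ryb25nLWdpYmJlcmlzaA==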

Authenticating yourself (Using openssl + zoho email)

Here we connect to zoho using the command:

openssl s_client -crlf smtp.zoho.com:465

s_client is the SSL/TLS client and "-crlf" is required to translate line feeds to CR+LF. This will be needed when you've finished entering data and want to submit the final email (submitting is usually done with <CRLF>.<CRLF>, that is enter + . + enter). If you do not give this flag, then enter won't be interpreted as CRLF and you will be stuck in s_client limbo!

You’ll get your SSL connection details after you log in, and at the end we’ve this:

220 mx.zohomail.com SMTP Server ready May 28, 2019 9:57:41 AM PDT

Now we need to tell the domain which we are using on Zoho. It can be zoho.com or a custom domain that you’re using.
You can even specify your Zoho username here.

ehlo yourdomain.com

OR

ehlo username@yourdomain.com

are both fine. Now enter the auth mechanism (let's keep it AUTH LOGIN) like in the telnet part. Then enter the base64-encoded username (your complete Zoho email id) and the base64-encoded password (the same as your login password).

I’ll specify again that you can base64 encode using:

echo -n "content to be encoded" | base64

After logging in successfully we can now finally send the email!

Sending email

Now we need to enter the details required by any email like sender, recipient, subject, email-body, etc.
Remember the commands are case insensitive, content is not.

To specify sender we use the mail command

mail from:<your-email-id>

Keep the angular brackets around your email id.

Then we specify the recipient

rcpt to:<recepient-email-id>

We’ve completed the basic setup, and now we need to tell what data will be there in the email.
The email data again consists of from and to fields. Here you can also specify a name (any name) under which you wish to send an email, given that the email id is still the same as provided in mail from.
Same goes for the to label.

data

data is the command name here. After you’ve supplied this, you need to see how the server detects the end of the message.
For us it is <CRLF> + . + <CRLF> that can be entered using keyboard sequence: <enter> + . + <enter>
This is followed on Mailjet as well as Zoho.
For example, Zoho’s response to data command is:

“354 Ok Send data ending with <CRLF>.<CRLF>”

from: your-name <your-email>
to: recepient's-name <receipient-email>
subject: subject details
content here is sent as email body

.

Now your email will be sent and you can quit by entering the command

quit

Finally, you can see the email that you sent all by yourself using SMTP!
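If you later want to script the whole exchange instead of typing it by hand, Python's smtplib speaks the same protocol (EHLO, AUTH, MAIL FROM, RCPT TO, DATA) under the hood — a rough sketch against Zoho's SSL port, with placeholder addresses and credentials:

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "you@yourdomain.com"        # placeholder sender, same address as in mail from
msg["To"] = "recipient@example.com"       # placeholder recipient
msg["Subject"] = "subject details"
msg.set_content("content here is sent as email body")

# SMTP_SSL opens the TLS connection on port 465; login() performs the AUTH step,
# and send_message() takes care of MAIL FROM, RCPT TO and DATA for us.
with smtplib.SMTP_SSL("smtp.zoho.com", 465) as server:
    server.login("you@yourdomain.com", "your-password")
    server.send_message(msg)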

Some final notes

  • If you specify one email in the rcpt command in the beginning, say "mail1@mail1.com", and another in the to field of the data command, say "mail2@mail2.com", then mail2 will appear as the recipient and mail1 will be added to the BCC recipients.
  • You can find out what each reply code means, like 250 is for action taken and completed here
  • help for each command and their syntax can be found using help command-name
    example:
    help data
    
  • You can also use ncat if you wish. To connect to SSL port using ncat on Zoho you’d do:
    ncat --ssl --crlf smtp.zoho.com 465
  • Woshub’s article was the one that got me started. Do check it out!

 

So, did your mail reach the other side? 🙂
storymode7

 

PS: I drafted an initial version of this post way back in December, while I was editing it today to publish it I got to know something new. The time zone in the greetings from the server is different in winters and in summers! It’s PST(Pacific Standard Time) in winters and PDT(Pacific Daylight Time) in summers.

 


by storymode7 at May 29, 2019 03:16 PM

May 26, 2019

Rahul Jha (RJ722)

Announcing new blog series on Deep Learning

Background

As you might have noticed during the past few days, on the recommendation of my mentor and a dear friend, Jason Braganza, I have been trying to push beyond my boundaries and trying to get some rust off of this blog.

I also thought that this was a nice opportunity to get myself out of my comfort zone and write about different themes which I had always enjoyed reading but never really dared to write about, things like politics and philosophy.


There’s an xkcd for everything

Although I really enjoyed nearly every part of such writing — churning out research material, looking for counter statements, understanding the argument as a whole & putting it all together, etc. — I find that putting together a rather convincing post takes up a lot of time and effort, something which my upcoming schedule wouldn't allow.

Instead, I've come up with a new scheme. I am currently revising the first few courses of the popular online specialization taught by Andrew Ng, deeplearning.ai, and this blog serves as a great opportunity to digitize my notes and make them publicly available.

And the selfish reason behind me doing this “noble task” is best summarized by Andrew Trask, who is a DeepMind Researcher and a Ph.D. student at Oxford University. He’s also the author of the amazing book: Grokking Deep Learning. Here’s what he says:

The secret to getting into the deep learning community is high quality blogging. Read 5 different blog posts about the same subject and then try to synthesize your own view. Don’t just write something ok, either — take 3 or 4 full days on a post and try to make it as short and simple (yet complete) as possible. Re-write it multiple times. As you write and re-write and think about the topic you’re trying to teach, you will come to understand it. Furthermore, other people will come to understand it as well (and they’ll understand that you understand it, which helps you get a job). Most folks want someone to hold their hand through the process, but the best way to learn is to write a blogpost that holds someone else’s hand, step by step (with toy code examples!). Also, when you do code examples, don’t write some big object-oriented mess. Fewer lines the better. Make it a script. Only use numpy. Teach what each line does. Rinse and repeat. When you feel comfortable enough you’ll then be able to do this with recently published papers — which is when you really know you’re making progress!

Rachel Thomas, who is a Ph.D. in Math, a professor at USF Data Institute and is very well known as the cofounder of fast.ai, has the following to say for blogs :

It’s like a resume, only better. I know of a few people who have had blog posts lead to job offers!

Helps you learn. Organizing knowledge always helps me synthesize my own ideas. One of the tests of whether you understand something is whether you can explain it to someone else. A blog post is a great way to do that.

Now, those are excellent reasons for me to start blogging already, but I’ve an additional whip from our very own Mr. J. Braganza looming over my head waiting to crank down on me the moment I stop writing, so I rather prefer to continue!

Thanks again Jason!

P.S. For regular updates, please subscribe to my newsletter or to the RSS feed.

May 26, 2019 06:30 PM

Jagannathan Tiruvallur Eachambadi

Taking IM Back Using Prosody

The common way for most people to chat is using an app on their phone, typically one of WhatsApp, Messenger or Telegram. In the same vein we have apps like Threema and Signal, which are more popular in certain circles. But the one consistent feature of all these services is the centralization of accounts, forcing users to be trapped inside a silo. This is true whether the services or clients themselves are open source or not. In the case of Signal, the author doesn't want to federate with people who want to run their own servers1. Given that the Freedom of the Press Foundation recommends Signal as a secure communications tool2, I have nothing against the security of the service and the client. It has been audited by cryptographers and the underlying protocol for end-to-end encryption is used by other messaging services. Even so, you may still want to have control over the service and client for various reasons, including not wanting to be dependent on centralized services.

XMPP

XMPP is an open standard for messaging. Notably, GTalk and earlier iterations of Messenger supported XMPP and used to federate with other servers. So it has a proven track record of actually working at scale, but it was closed off so those companies could grow their own proprietary services and lock users into their platforms. XMPP is generally extended by XEPs, XMPP Extension Protocols, that include features such as end-to-end encryption, file uploads, avatars etc. Even the definition of the XEP process itself is XEP-0001 :)

To use XMPP, one can sign up for an account at one of the free services which need to be verified or you can run your own server.

Prosody

Prosody is an XMPP server that implements a lot of XEPs3 and further has community modules which can be used just as easily4. I will be using Debian 9 on the server and the project provides repositories for installing the latest stable release to make installation simple.

To install start by adding their repository to Debian’s sources list,

# /etc/apt/sources.list.d/prosody.list
deb https://packages.prosody.im/debian stretch main

add their key

wget https://prosody.im/files/prosody-debian-packages.key -O- | sudo apt-key add -

and finally install the package using apt,

sudo apt update && sudo apt install prosody

Configuring Prosody

The configuration of Prosody is a single Lua file located at /etc/prosody/prosody.cfg.lua. If you are configuring this on a server, the first course of action is to change the domain under which you want to run the server.

VirtualHost "myxmppserver.im"

Next is to specify the certificate used for SSL,

https_certificate = "/etc/prosody/certs/myxmppserver.im.crt"

I use Let's Encrypt to obtain certificates for free, and if you already use it for the server, the certificate can be imported from its current location.

sudo prosodyctl --root cert import /etc/letsencrypt/live

At this point you can start the server using systemctl,

sudo systemctl start prosody

By default it will log under /var/log/prosody/, so be sure to check there in case there are issues. To test the installation, you need to add an account. By default, the configuration prevents creating accounts from clients as a safety feature, but if you are planning to run a public server this can be changed later.

sudo prosodyctl adduser test@myxmppserver.im

Test the account using a Jabber client. There are many programs but I find the android app Conversations, to be fully featured and a pleasure to use. If all is well, we can move on to enabling some nice to have features in Prosody.

In the modules_enabled table, uncomment mam and csi_simple. mam gives access to the message archive, which allows Conversations to pull down the history from when you were offline. For using some community modules, it was recommended by linkmauve to keep a clone of the modules repository and link the required modules into the modules directory under Prosody.

hg clone https://hg.prosody.im/prosody-modules/ prosody-modules
sudo ln -s ~/prosody-modules/mod_smacks/mod_smacks.lua /usr/lib/prosody/modules

You can link these modules as required. To enable them, add the module to the modules_enabled table and restart the server. You can check for compliance of your server on https://compliance.conversations.im/ which can be useful to see if you want to enable other modules according to your requirements. Please remember to create a separate account to run these tests by using prosodyctl since you will have to provide passwords to the compliance site.

In conclusion, one can run their own XMPP server or use a trusted server, for example, https://jabberfr.org/ and chat with anyone on XMPP whether on your server or on their own islands. You can meet me at xmpp:jagan@j605.tk, See ya :)

by Jagannathan Tiruvallur Eachambadi (jagannathante@gmail.com) at May 26, 2019 09:09 AM

May 25, 2019

Kuntal Majumder (hellozee)

Dissecting boost::astar_search

Right now, I am having a hard time understanding BGL's (the Boost Graph Library) template spaghetti, so I decided to write a blog post while I decipher it, one piece at a time, documenting the whole thing along the way.


by hellozee at disroot.org (hellozee) at May 25, 2019 11:13 PM

Rahul Jha (RJ722)

Silk Road, Revolutions and Systems

Today, I read the story of Silk Road: how the young idealist Ross Ulbricht, tired of chasing success the old-school way, found his way around the darkweb to create an online bazaar for the trading of illicit materials, mainly drugs, which he named Silk Road. As a part of the darkweb, it was operated as a Tor hidden service, which protected the personal privacy of users by concealing their details from anyone conducting network surveillance - from the Government to their ISP. Additionally, all payments were made using Bitcoin, a cryptocurrency which provides a certain degree of anonymity.

The aim behind writing this blog post is to think out loud and try to gain insight into the oversights made by some of the most prominent revolutionaries in history.

When operating his online empire, Ross would take on the identity of Dread Pirate Roberts (~DPR) (borrowing the name from “The Princess Bride”, in which the pirate was a mythical character, inhabited by the wearer of the mask).

Ross (aka DPR) was having trouble switching back-and-forth between these different personalities, the many different facets of which were penned down beautifully in the original article :

To Alex, Ross was the cool new roommate; to Julia [his on-and-off girlfriend], a passionate lover and inspiration; to his family, the perpetual Eagle Scout; to Force [undercover DEA Agent posing as a Puerto Rican cartel middlemen], an unlikely friend in the night; to Tarbell [FBI Agent investigating his case], a smart kid defeated by his own arrogance. To the Southern District of New York US attorney’s office, Ross was simply the criminal conspirator Dread Pirate Roberts.

The likeliest reality is that Ross was all of those things. The open-minded seeker who conscientiously tried to pluck trash from a tree was Ross. As was the feverish visionary creating a virtual empire at any cost. Neither truth invalidated the other. Ross and DPR can (and did) coexist.

Ross didn't exactly dream of building this huge empire of illicit business; it was essentially all baby steps, stemming from the influence Ludwig von Mises - an Austrian economist described in the story as "a totem of the modern American libertarian orthodoxy" - had on Ross. As B.J. Neblett said, "We are the sum total of our experiences. Those experiences – be they positive or negative – make us the person we are, at any given point in our lives. And, like a flowing river, those same experiences, and those yet to come, continue to influence and reshape the person we are, and the person we become. None of us are the same as we were yesterday, nor will be tomorrow." According to von Mises, a citizen must have economic freedom to be politically and morally free.

If you haven’t read the story yet, please do and then come back! It might easily be one of the most riveting cyber-criminology reports you ever read.

Joseph Stalin, Adolf Hitler and Ludwig von Mises, they all had an ideology - a vision of the ideal world, and a way of bringing peace to world. For them, it embodied an expression which society must adhere to lead them towards utter completeness and happiness.

The ideology of Hitler was an ideology of conquest: the “manifest destiny” of a superior race to conquer, occupy, and control lands of the “lesser” people - the Untermenschen - for the sole benefit of the superior race.

The ideal society for Stalin was one in which people contribute to it because they feel it is their pleasure and responsibility to do so, and in which people only consume what they need while being mindful of the needs of others.

And they executed their ideas, bringing about their ‘revolution’!

Both regimes - the Third Reich and Stalinism - were responsible for millions of deaths and untold amounts of suffering.

One can argue that Ross Ulbricht's 'revolution' was nowhere near that scale, but that is immaterial to our discussion. It followed the same pattern, which was summed up rather well by Bearman in the original story:

It’s an age-old story, the bloom and wilt of revolution. After tearing down the establishment’s walls, the new regime soon realizes the rubble would make a fine set of gallows. Just as Tarbell thought, all systems are the same. At the beginning of Silk Road, what Ross created was just a system. Then, at a certain point, it became his system—at which moment the system was doomed.

Isn’t it strange - How we become the very thing we fight against!


Gazing into Abyss

“Beware that, when fighting monsters, you yourself do not become a monster… for when you gaze long into the abyss, the abyss gazes also into you.”

― Friedrich W. Nietzsche

For people who prefer examples in fiction over history, what happened in the finale of Game of Thrones is a prime example of this pattern. [Spoilers Ahead] In light of everything Daenerys [one of the show's main protagonists] accomplished — birthing dragons out of stone, freeing thousands of slaves, helping the Starks defeat an army of ice-zombies — the viewers experienced firsthand the mindset of a revolutionary who believed that it was incumbent upon her to liberate the entire world. Yet it was when she failed to draw a line between herself and her vision — when she succumbed to her temptations, burning alive and hence killing thousands of people in King's Landing — that she failed as a ruler, becoming exactly what she had hoped to abolish: tyranny. It is worth noting here that all the while she unapologetically burnt the innocent, she was fueled by the exact same idea - to liberate the innocents of the world from tyranny.

In Ross’s case, the fact that he was feeling uneasy even as DPR (who was a rather confident and eloquent character); that he had already begun failing at what he had intended to do was the first clue that the shadows of doom had already fallen upon him. But he deceived himself in the name of his idea - in the belief that he was doing the right thing.

Is it really this belief and total devotion to our idea that blinds us, or is it the power and the riches which corrupts us? Or maybe it is a fundamental misunderstanding of our very own conceived idea? Perhaps it might be a skewed combination of all of the above.

I do not claim to know the answers to any of the above questions.

But let’s look at another revolutionary: Mahatma Gandhi and the revolution he brought about in India.


Mahatma Gandhi was called Bapu (Father) by many, including Jawaharlal Nehru

In response to the Rowlatt Act imposed by the British and the Jallianwala Bagh Massacre, Gandhi led the non-cooperation movement, appealing to the masses to adopt swadeshi goods and local handicrafts and to boycott British goods. The movement was based on the principle of Ahimsa (non-violence), and after two years of hard work, it gained full momentum in 1922. It seemed that the dream of Swaraj (self-governance) was finally turning into reality.

But giving a face to Gandhi's fears, cases of violence were reported from all over the nation, and after the Chauri Chaura incident — a large group of protesters participating in the non-cooperation movement clashed with the police, who opened fire; in retaliation, the demonstrators attacked and set fire to a police station, killing all of its occupants — he decided to call off the protest indefinitely. This was indeed a very difficult and brave decision on his part: he could have gotten what he wanted for the nation and ignored the 'milder' cases of violence, but his moral caliber was defiant of such behavior and he chose to voice it.

It was perhaps because he believed in a perpetual fight - a fight we all have to fight against our own moral demons - in which the nation must not succumb to the demon of violence.

Subsequently, Gandhi launched many campaigns perfecting the concept of Satyagrah — सत्याग्रह (Satyagrah): सत्य (Truth) + आग्रह (insistence), सत्य के लिए आग्रह, the truth force, a particular form of non-violent civil resistance — finally leading the nation to independence in 1947.

I guess we all get to play Ross sometimes, and I believe that creating barriers, as Gandhi did, to encourage the higher moral stance of one’s own values - be it by ruthless questioning of one’s own beliefs and biases, or having an external support mechanism for keeping oneself on track - would help us dodge the doom of our system.

But I do find it very interesting to ponder upon how 'easy' it is to be lost, to be engrossed so deeply in our visions that we forget what they stood for in the first place; to cross the rather fine line drawn between us and the monster, and how tools like identities and the different masks we wear make it all the easier.

Special thanks to Abhipsha for proofreading and making this article readable!

May 25, 2019 06:30 PM

May 24, 2019

Jason Braganza

French, Week 2

This was a week of learning to work at it.
Lots of fun.
Managed to keep up with daily lessons in the Fluent Forever app. Am having trouble with the guttural ʁ sound.
Hopefully will get better at it with practice.

by Mario Jason Braganza at May 24, 2019 03:11 AM

May 23, 2019

Bhavesh Gupta (BhaveshSGupta)

Birthday Week 2019

So another birthday gone, as usual. I am not a big celebrator, except last year when we had a trip to Rishikesh on my birthday (PS: I am yet to write that post). For the most part, my birthday celebration is only dinner with family. We go to a fancy hotel, have some food and come back. It's almost like this has become a tradition to celebrate my birthday this way. This post was to be published on 19th May; instead I am writing it now because of my laziness — I don't often write due to fear of writing, or fear of making mistakes while writing.

May 23, 2019 07:30 PM

May 21, 2019

Rahul Jha (RJ722)

Freedom of Speech, Authoritarianism, Freedom of Press and Faiz

Right to Free Speech is essential for a democracy. This blog post aims to shed some light on the recent authoritarian attempts made by hindutva-right-wing to curb free speech and how can we fight back.

“India’s Divider in chief”

TIME magazine, in its May 20 edition, featured the Prime Minister of India, Mr. Narendra Damodardas Modi, on its cover page.


The photo, which appeared rather grim, was tagged India's Divider in Chief

The article opens with the sentence:

“Of the great democracies to fall to populism, India was the first.”

Under Prime Minister Modi, the story read,

“Nation’s most basic norms, such as the character of the Indian state, its founding fathers, the place of minorities and its institutions, from universities to corporate houses to the media, were shown to be severely distrusted.”

Furthering the argument, it says:

“…Under Modi, minorities of every stripe – from liberals and lower castes to Muslims and Christians – have come under assault”

It talked about the promise of the economic reform of the messiah which has failed to materialize, and how "he [the BJP] is lucky to be blessed with so weak an opposition – a ragtag coalition of parties, led by the Congress, with no agenda other than to defeat him".

But, we aren’t here to debate this. Let’s look at the before and aftermath of the incident.

2012


Manmohan Singh featured on one of the 2012 editions of the same magazine

In 2012, Manmohan Singh appeared on the cover of the same magazine as "The Underachiever", which might even have been an understatement at the time.

Many BJP supporters, including but not limited to their leader Ravi Shankar Prasad, without a shred of doubt about the authenticity of the article, straight away demanded his resignation on the grounds that the image of India had been spoiled.


Narendra Modi appears for the first time on the cover page of TIME, highlighted positively - Modi means business

A subsequent edition of TIME in 2012 again showed interest in the Indian economy, featuring Modi on the cover page entitled "Modi means business" - reinforcing the dream the BJP instilled in Indians of the economic reform it promised.

2015


Modi, now Prime Minister, appeared once again on the cover page of TIME. This time, it said Why Modi Matters

And again in 2015, Modi was featured, this time as the Prime Minister. The tag line said “Why Modi Matters” depicting Modi government positively.

Apart from this, he was also awarded a place in the list of "The 100 Most Influential People" by (yes, you guessed it right) TIME in its 2014, 2015 and 2017 editions.

Up until now, all in favour, all good - Lo and behold, TIME magazine is the best magazine in the world.

2019

TIME's opinion of Narendra Modi over the years closely tracks the change in public opinion: slowly declining until 2017 and falling much more steeply after that. The 2019 cover page is a simple depiction of that, but now the complaints begin.

Controversy behind Pakistani Writer

The author of the article, Mr. Aatish Taseer, is a Pakistani journalist. Following the release of the magazine, his Wikipedia page was severely vandalized - stating that he writes against Brahmins, is a member of Lashkar-e-Taiba, a Pakistani militant group, and that he is also working as a PR manager for the Congress.

Wikipedia page of Aatish vandalized

This false information was then tweeted numerous times as widespread propaganda by some trolls on the internet. Other means to defame him were also adopted.


One of many fake TIME cover created by trolls for defaming Aatish

One thing worth noting here is that the author actually is half-Pakistani (born to an Indian mother, Mrs. Tavleen Singh, and a Pakistani father, Mr. Salmaan Taseer). In fact, Mr. Salmaan was one of the most liberal politicians of Pakistan, which led to his assassination in 2011 because of his strong opinions on the Blasphemy Laws in Pakistan. His mother, Mrs. Tavleen, appears to be a Modi supporter.

This was the journey of TIME magazine - from being used as a source for demanding the resignation of a Prime Minister in 2012 to being stamped as an anti-national, anti-Hindu, pro-Pakistan and pro-Congress magazine - which cannot be trusted at all.

The Rise of authoritarianism

For the past few years, there has been a shift in the governance model and in its policy and stance on failures - moving further and further towards authoritarianism. Anyone who dares question them or speaks against them is branded 'anti-national' and, crazy as it might sound, 'anti-Hindu'. First come hate speech, death threats and trolling for propaganda. Even then, if someone doesn't stop, then, if they are lucky, they are charged with sedition (Arundhati Roy, Kanhaiya Kumar, Umar Khalid and Aseem Trivedi are just a few famous names who have been charged with sedition, primarily because they spoke against the ruling party; it's ironic that this draconian law was used by the British to suppress the freedom movement), or else they are dealt with violence - lynched by a mob, assassinated or forced to commit suicide (to which government officials would then put up a huge mournful act).

The cases of all these victims were further undermined by defaming them across mainstream media, portraying them as criminals or associating them with terrorist organizations, which brings me to my second point…

Freedom of The Press

It isn’t just the voice of the student leaders and activists which is silenced, but even the mainstream media journalists are being denied their right to free speech.

Gauri Lankesh, one of the top political journalists of India was shot dead outside her house because she was an outspoken critic of right-wing-‘hindutva’-politics and was present at the forefront of many protests, including the protest against the smearing of Kannada writer Yogesh Master’s face with black ink.

When Punya Prasun Bajpayi, in his show Masterstroke, uncovered some false claims made by the Prime Minister regarding a rural lady by interviewing her, TV screens were blacked out for the subsequent episodes of his show in many parts of the nation. This political pressure on ABP News further led to the resignation of ABP's network managing editor, Milind Khandekar, closely followed by Bajpayi's own. After this round of resignations, another journalist at the network, Abhisar Sharma, went on leave and finally resigned a few days later.

This is an organized attack on media, disrupting any dialogue or questions in the matter of starvation deaths, unemployment, education affairs, farmer suicides, clean air and water.

The scripted interviews which our Prime Minister gives, all the while bluntly blurting out lies, gibberish and factually incorrect statements, would be comprised of questions aimed at reinforcing the propaganda amongst the masses.


This is a mainstream media house - The job of these 'journalists' has crumbled to the extent that they now spend time on Kim Jong Un's wives rather than questioning or analyzing government policies, creating awareness or showing statistics about the current unemployment in India.


India currently ranks 140 out of 180 countries on World Press Freedom Index, which is a disaster for the world’s largest democracy.

What can we do?

In the words of Ravish Kumar, one of the handful of journalists who haven't yet forgotten what journalism is and still have the courage to raise the right questions (Himmat, a magazine edited by Rajmohan Gandhi, which maintained independence despite State repression when the Emergency was imposed in 1975, serves as a great source of inspiration for today's journalists to find ways to resist corporate control and to tell readers the truth - read the complete story here):

Ask questions. Questioning government is the highest service to the nation.

Faiz Ahmad Faiz, one of the most celebrated poets in Urdu literature, who was also a protagonist of the Progressive Movement of India (1936), wrote a nazm, "Bol ke Lab Aazad Hain Tere" (English: Speak, for your lips are free), possibly in the wake of the Kashmir Liberation Movement, dedicated to his friend and renowned music composer Arshad Mahmud, who was also his student and compatriot. (When Safdar Hashmi, who later became a symbol of cultural resistance against authoritarianism for the Indian left, was murdered while performing his street play 'Halla Bol' (Attack), Faiz's nazm served as a rallying cry for the protestors, with each line followed by the chant 'Halla Bol'.)

This nazm couldn’t be any more relevant today. Have a read for yourself (English translation below):


بول کہ لب آزاد ہیں تیرے

بول زباں اب تک تیری ہے

تیرا ستواں جسم ہے تیرا

بول کہ جاں اب تک تیری ہے

دیکھ کہ آہن گر کی دکاں میں

تند ہیں شعلے سرخ ہے آہن

کھلنے لگے قفلوں کے دہانے

پھیلا ہر اک زنجیر کا دامن

بول یہ تھوڑا وقت بہت ہے

جسم و زباں کی موت سے پہلے

بول کہ سچ زندہ ہے اب تک

بول جو کچھ کہنا ہے کہہ لے

English:

bol ki lab āzād haiñ tere

Speak, for your lips are free

bol zabāñ ab tak terī hai

Speak, your tongue is still your own

terā sutvāñ jism hai terā

That this frail body is still yours

bol ki jaañ ab tak terī hai

Speak, your life is still your own

dekh ki āhan-gar kī dukāñ meñ

See how in the blacksmith’s forge

tund haiñ sho.ale surḳh hai aahan

Flames leap high and steel glows red

khulne lage qufloñ ke dahāne

Padlocks opening wide their jaws

phailā har ik zanjīr kā dāman

Every chain’s embrace outspread!

bol ye thoḌā vaqt bahut hai

Time enough is this brief hour

jism o zabāñ kī maut se pahle

Until body and tongue lie dead

bol ki sach zinda hai ab tak

Speak, for the truth is living yet

bol jo kuchh kahnā hai kah le

Speak whatever must be said!

It is rather pressing that we give form to the voices of Faiz and Ravish. I appeal that we, the citizens of India, speak out, question the government, and spread awareness of our rights amongst our fellow citizens.

May 21, 2019 06:30 PM


May 17, 2019

Rahul Jha (RJ722)

A glimpse into the darkness: the 'Brutish' rule in India

A second-generation freeborn attempts to understand the impact and aftermath of colonization of India by British. It turns out that even an educated Indian of today is still not aware of the atrocities and turmoil it caused the country.

India - the golden bird of medieval times, known for its riches - the diamonds and the muslins, one of the world's greatest exporters of silk - a country with a share of more than 27% of the world economy during the sixteenth century - the country which was then colonized for 200 years, all the while feeding the interests of Britain, leaving post-British India with a crumbling share of a little more than 3% of the world GDP.

This is my country.

I am an Indian. This blog post highlights the pain it causes me that we, the youth of India - the second-generation freeborn - aren't affected by this dark side of our history, and how our education [a British establishment] merely portrays colonization as a chronological series, celebrating the independence and mapping it with subsequent post-independence failures. Little or no attempt is made to lay emphasis on the curtailed legacy of India, the utter amorality of the British rule, or the atrocities imposed on our forefathers without scruple or principle. This insincerity has led to an incomplete analysis of the deep wounds of colonization and of finding a cure.

On the contrary, the notions about how the British brought industrialization to India, how trains were supposedly a precious ‘gift’, and how British were key to the political ‘unification’ of India are quite popular.

But recently coming across an Oxford Union debate by Dr. Shashi Tharoor - a novelist, diplomat and Indian politician - on the proposition "Britain Owes Reparations to her Former Colonies", which he won with his characteristically impassioned and precisely argued speech, was an eye-opening experience, and it led me to pursue the topic further.

The Debate



After the debate, Tharoor left England (in his own words, “pleased enough, but without giving the proceedings a second thought”). However, a couple of months later, once the speech was posted online, it took on an almost surreal afterlife, not only going viral across various social media platforms and causing many a storm in chai cups across the sub-continent and Britain, but also managed to unite, in India, the old and the young, the radical and the conservative, and most uniquely, the ever-estranged political left, right, and centre of our country in unequivocal approbation.

On this, says Tharoor

“Yet the fact that my speech struck such a chord with so many listeners suggested that what I considered basic was unfamiliar to many, perhaps most, educated Indians. They reacted as if I had opened their eyes, instead of merely reiterating what they had already known.

It was this realisation that prompted my friend and publisher, David Davidar, to insist I convert my speech into a short book – something that could be read and digested by a layman but also be a valuable source of reference to students and others looking for the basic facts about India’s experience with British colonialism. The moral urgency of explaining to today’s Indians – and Britons – why colonialism was the horror it turned out to be could not be put aside.”

The Book

He indeed did gift India with his book “An Era of Darkness” deconstructing the british rule, unfolding around various themes: Of loot and of the hemorrhaged Indian wealth, of the increased rural poverty, the nefarious British policies (like divide-and-rule) which continue to haunt the contemporary India to date, the famines and the holocausts, and of course Cricket.

The book provides as a leaping point, marking a paradigm shift forward so that the Youth of India knows the importance of the past and of talking about it, if only to unpick its skein better – but to do it yet, with a sense of irony and wisdom.

TLDR;

An Era of Darkness, by Shashi Tharoor is a must read!

May 17, 2019 06:30 PM

Kuntal Majumder (hellozee)

This is the year of Linux Desktop

The title is a running joke now, please don't hit me, I know that this is one hell of an aloof conclusion. But why? Let's review the situation. Linux is just the kernel. Add a display server on top of it. Add a window manager. Add a compositor. Add a display manager. You will get a desktop — yes, a desktop and not a desktop environment.

May 17, 2019 02:16 PM

This is the year of Linux Desktop

The title is a running joke now, please don’t hit me, I know that this is one hell of an aloof conclusion. But why? Let’s retrospect the situation. Linux is just the kernel Add a display server on top of it Add a window manager Add a compositor Add a display manager You will get a desktop, yes a desktop and not a desktop environment.

by hellozee at disroot.org (hellozee) at May 17, 2019 02:16 PM

May 12, 2019

Prashant Sharma (gutsytechster)

Python testing with pytest

Testing plays a crucial role in software development. Testing each part of code as you write it is considered a good habit. Passing tests build confidence that you haven’t accidentally broken the already working code.

Testing in Python can be done using a variety of modules, one of which is pytest itself. pytest provides a few advantages over the standard Python testing module unittest:

  • pytest provides error highlighting and points to the exact code snippet that causes a test failure.
  • pytest allows us to write tests with minimal or no boilerplate. It allows test cases to be written in a compact manner.
  • pytest provides the notion of fixtures, which help perform certain actions before and after the test code without boilerplate or code duplication.
  • pytest can also run tests written in the unittest style.
  • pytest provides a wide variety of community plugins that increase its flexibility.

Let's get started with it.

How to use pytest?

pytest is a third-party framework. So to use it, we need to install it via pip as

pip install pytest

Creating a test suite in pytest is as easy as defining a module with a couple of functions. Suppose you have a Python file with some code in it:

#example.py

def add(number1, number2):
    return number1 + number2

Now we wish to write a test for it using pytest. It would look something like this

#tests/test_example.py

from example import add

def test_add():
    arg1 = 5
    arg2 = 3
    result = add(arg1, arg2)
    assert result == 8
    assert add(3, 2) == 10

And that's it; we didn't even need any pytest-specific functionality to write this test. Our test code asserts the expected value against the actual value returned by the function. We also deliberately include a wrong assertion to see how pytest behaves on it.
Now to run this test, you just need to type pytest or py.test in the terminal. pytest is smart enough to find test files present in your current directory. Any file whose name follows the test_*.py or *_test.py pattern is discoverable by pytest, even within a multi-level directory structure. So let's run the tests

collected 1 item                                                                                                     

tests/test_example.py F                                                                                         [100%]

====================================================== FAILURES =======================================================
______________________________________________________ test_add _______________________________________________________

    def test_add():
        arg1 = 5
        arg2 = 3
        result = add(arg1, arg2)
        assert result == 8
>       assert add(3, 2) == 10
E       assert 5 == 10
E        +  where 5 = add(3, 2)

tests/test_example.py:9: AssertionError
========================================= 1 failed in 0.05 seconds ====================================================

You can see that pytest provides a detailed report of the test failure. This allows us to write compact tests without losing introspection information, which is a great advantage. To see more examples of pytest failure reports, you may refer here.

pytest fixtures

I'd say the most amazing feature of pytest is the notion of fixtures. Fixtures in pytest provide a fixed baseline upon which tests can be executed reliably multiple times. They offer a cleaner approach than the traditional setUp/tearDown functions. You know what? A fixture can itself use another fixture as well.

Defining a fixture in pytest is as simple as decorating a function with @pytest.fixture decorator. Let’s see an example

# tests/test_example.py

import pytest
from example import add

@pytest.fixture
def base_data():
    arg1 = 5
    arg2 = 3
    return (arg1, arg2)

That’s it. You can see how easy it is to define a fixture in pytest.

We can use the fixtures in three different ways

  • By passing the fixture as a parameter to the test function definition. pytest will automatically discover the fixture (if it is available) when you use it within the test function.
    # tests/test_example.py
    ...
    
    def test_add(base_data):
        number1, number2 = base_data        #tuple unpacking
        assert add(number1, number2) == 8
  • We can also use @pytest.mark.usefixtures() as a decorator on a test function or class. It lists the fixtures that should run for that function or class, and it is most useful when converting unittest classes to use pytest fixtures. Note that usefixtures only makes the fixture run; it does not hand the fixture's return value to the test, so it suits fixtures that work through side effects. With it, a test could look like
    @pytest.mark.usefixtures("base_data")
    def test_add_runs_after_fixture():
        # base_data has run, but its return value is not accessible here
        assert add(5, 3) == 8
  • Another way of using a fixture is the autouse parameter, set to True in the fixture definition. By default it is False. If you set it to True, all the tests will use that fixture automatically; we don't need to request it explicitly. But with great power comes great responsibility: you need to be sure that it doesn't lead to any unwanted results.

A fixture need not return anything, as it just defines some code that we want to run before our tests. Though if you want to use any data defined within the fixture, you ought to return it so that it can be used within the test function. Here I've returned a tuple of arguments and then unpacked it to get the values.

scope parameter

pytest fixtures define a scope parameter that decides how often a fixture is executed. In the fixture definition, we can set the value of scope to one of the following:

  1. function: It runs the fixture once per test function.
  2. class: It runs the fixture once per test class.
  3. module: It runs the fixture once per module.
  4. session: It runs the fixture once per session.

By default, the value of scope is function. An example could look like this:

@pytest.fixture(scope="function", autouse=True)
def base_data():
    arg1 = 5
    arg2 = 3
    return (arg1, arg2)

As you can see, it's very simple to pass the various parameters provided by pytest into a fixture definition. This fixture will now be used automatically by all the test functions, once per test.

Implementing teardown functionality

We often require teardown functionality to execute some code after a test run. To implement this, pytest supports fixture-specific finalization code by using a yield statement in place of the return statement. All the code after the yield statement executes once the test run is complete.

For example, when a test opens a connection to a database, the connection must be closed irrespective of whether the tests fail or pass. To implement such functionality we can write the fixture as

import pytest
import MySQLdb

@pytest.fixture(scope="module")
def mysql_connection():
    db = MySQLdb.connect(host="localhost",
                         user="gutsytechster",     
                         passwd="password",
                         db="mydb")
    cur = db.cursor()
    yield cur
    print("teardown database connection")
    db.close()

The last two statements will be executed when the last test in the module has finished its execution, regardless of the status of the tests.

conftest.py

When some fixtures are to be used across many test files, we can define all of them in a conftest.py file. We then don't need to import the fixtures into the tests; they are discovered automatically by pytest. Damn! It's such an amazing feature. Isn't it?
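For instance, a minimal sketch of sharing the earlier base_data fixture this way (the layout below is just illustrative):

# tests/conftest.py

import pytest

@pytest.fixture
def base_data():
    # discovered automatically by every test module under tests/, no import needed
    return (5, 3)

Any test file under tests/ can now simply take base_data as a parameter.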

Assertions in pytest

We have already seen examples of assertions in the demo tests above. Making an assertion in pytest means using Python's plain assert statement. When the condition given to assert evaluates to True, nothing happens; when it evaluates to False, the test fails and shows the detailed failure.

Though to assert whether a specific exception is raised, pytest uses a different construct. To make an assertion about a raised exception, we use pytest.raises as a context manager. For example:

import pytest

def test_zero_division():
    with pytest.raises(ZeroDivisionError):
        1 / 0

This test will pass because the exception given to pytest.raises is raised within its context. If we need access to the actual exception info, we may use it as

import pytest

def myfunc():
    raise ValueError("Exception 123 raised")

def test_myfunc():
    with pytest.raises(ValueError) as excinfo:
        myfunc()
    assert "123" in str(excinfo.value)

excinfo is an ExceptionInfo instance which can be used to access its .type, .value or .traceback.
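For instance, a small sketch continuing the myfunc example above:

def test_myfunc_exception_details():
    with pytest.raises(ValueError) as excinfo:
        myfunc()
    # ExceptionInfo exposes the raised exception's details
    assert excinfo.type is ValueError
    assert "Exception 123 raised" in str(excinfo.value)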

Check code coverage

pytest provides a variety of command line options which give the flexibility to use this tool in some amazing ways. One such option helps us measure the code coverage of our tests; it comes from the pytest-cov plugin, which has to be installed separately (pip install pytest-cov). We can see the code coverage by running the following command

pytest --cov

And you’ll see the detailed coverage explanation of every file/module present in your project structure.
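For instance, to limit the report to a single module (this assumes the pytest-cov plugin mentioned above is installed; the module name is just illustrative):

pytest --cov=example tests/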

I feel this would be enough for you to get started with writing tests using pytest. Though there are many other features and options that can be utilized when using pytest. You should definitely have a look at them.

References and Further reading

  1. https://docs.pytest.org/en/latest/index.html
  2. https://pythontesting.net/framework/pytest/pytest-introduction/
  3. https://pythontesting.net/framework/pytest/pytest-fixtures-nuts-bolts/

If you find any mistake or want to give any suggestion, don’t hesitate to write below in the comment section. I’ll be very glad to take any feedback. Meet you next time.

Till then, be curious and keep learning!


by gutsytechster at May 12, 2019 06:05 PM

Piyush Aggarwal (brute4s99)

hello GSoC

PROLOGUE

Peace in our time. :)

Tony Stark

I have seen some of my college mates bag internships working on hot technologies like Deep Learning and Android, at places like DRDO and Microsoft, with stipends higher than the income of an average Indian fresh out of college. Having grown up as a king of the hill, I obviously started feeling sick of always catching up. When I started my FOSS journey on 16 June 2018 with #dgplug, little did I know where I would be one year later. :)

Ever since I came back from the PyCon India 2018, Hyderabad, I’ve been a lot more involved in public speaking, volunteering at technical events and KDE. This is a recap, that aims to cover the important events of the past year that led to this- a GSoC 2019 student.

BABY STEPS

The training at #dgplug taught me the basics of IRC communication, guidelines of how to communicate in mailing lists, and also seeded this habit of regular blogging within me. With the knowledge from my summer training, I headed on to the first software that came to my mind - pandas.

When I started contributing to pandas, it took over a couple of months for my PRs to get merged into master. I contributed both code and documentation, and fortunately, I had the rare chance of meeting my PRs’ reviewer, Marc Garcia, at PyCon India 2018. He came in as the devsprint mentor for pandas. It’s nothing short of a magical experience to have the reviewer explain to you in person things like the development flow and the stuff that usually stays hidden behind the internet. After meeting so many awesome people in real life, I understood this is my gig. After a couple of merges in pandas, I decided to start contributing to a much bigger organization, on software that I use daily and that is also used by the masses, i.e. the end users. Since I used the Plasma DE by KDE, along with a host of other software that came with it, I locked my eyes on KDE.

I started talking about contributing to KDE on #dgplug, and Kuntal Majumder (hellozee) there advised me to visit #kde-devel and #kde-soc. From there on, my KDE journey began, and I started lurking in the IRC channels from mid-October. There I met Valorie Zimmerman, a KDE admin, who helped me a lot in getting to a project I love to use. It was she who helped me realize KDE Connect was the project I was looking for.

If you contribute to a project you love, you will keep contributing to it for a long time, even without any sort of motivation

Right around the end of October, I joined KDE Connect’s IRC-bridged Telegram channel and asked for guidelines to get started with developing KDE Connect. Luckily, many of the developers at KDE Connect help almost instantaneously, no matter what time of the day, thanks to a wide-spread developer group.

There are not a lot of developers at KDE Connect, but luckily they are spread across the globe and cover all possible times of the day.

Kudos to remote working!

TOUGH CHOICES

There can be a lot of points in your journey when you feel torn: you’re finally able to wrap your head around an awesome project that you wanted to work on, but then some opportunity pops up that could slingshot you right into the middle of hot technologies that interest you.

One such choice came to me right around mid-November, when I got shortlisted for the PyTorch Scholarship. This scholarship could help me get into the thick of Deep Learning, a technology I love to read about and work on occasionally. Still, I decided to drop it and focus on one thing at a time.

MOVING ON

I started my contribution with a pretty obvious bug, which involved wrong titles for some KDE Connect plugins in the Android app. Luckily (for me :p) no one had noticed it yet, so I had a good chance to start working on Android (which I used to hate, or maybe dread). I made the patch and it took about a week or so to get merged. It was a great feeling to get a patch into KDE Connect, but it took about a month of lurking and stumbling and asking questions before I ultimately had my first teensy contribution by mid-December.

PREPARATION TO PROPOSAL

It was right around this time that Albert Vaca, maintainer of KDE Connect, mentioned that KDE Connect for Windows could be a GSoC project this time around. I was elated to see that message, and asked him if I could target it for my summer. He allowed it, but it needed a good enough proposal, and a ton of contributions.

Mentors need to believe in the student(s) they pick; while GSoC can bring new lifelong contributors to awesome projects, it can also end up as a useless code dump and a lot of headache for the existing developers.

Since then, while fishing for more fairly easy bugs to fix, I also started to check out the Windows build and what challenges I would be facing while working on it.

I also used to talk to jambon_t, a then-active contributor to the Windows build, and learn some know-how of Windows development. For a school student, he is remarkably knowledgeable about working on the Windows build.

A TWIST

Some sinister chemical sneaked into Delhi’s fast food stalls in early Feb. Thanks to it, many people, including myself, got very badly ill. I was unable to work, or even visit college for most of February because of an upset stomach and a heavy head.

PROPOSAL

A couple of months into 2019, I was able to build a POC that demonstrated native notifications on Windows, forwarded by the connected Android device. A couple of hours after posting the screenshots on the KDE Connect Telegram group, I got a message from Simon Redman, a developer at KDE Connect. It read:-

Simon Redman

It’s a really awesome feeling when someone actually recognizes your efforts after so long.

Since then, I worked hard on creating the perfect proposal. I got a lot of feedback from various KDE developers and reviewers. The Windows development expertise of Hannah Von Reth, the maintainer of Craft, was a huge help with pretty much every bump I encountered while researching the Windows port. She is also one of my mentors for the GSoC project, along with Simon and Albert.

After tons of suggestions and fixes, I submitted my final proposal.

After that, I did not stop. The prime motivation when I started contributing to KDE Connect, was giving back to the community. With that thought, I kept contributing and contacted my mentors in times of trouble.

FAST-FORWARD TO MAY 6, 2019 : UTC 17:30

Thirty minutes before Google’s announcement of the students selected this year for GSoC, I started listening to calming Indian songs like “Breathless” and “Kun Faya Kun”. I poked my mentors, but they refused to let the cat out of the bag. I passed the time fixing my Windows dev flow, which had somehow started crashing.

Right when 1800 struck, I looked at the browser screen and hit refresh on my GSoC dashboard. The servers must have come under heavy load, just like on CBSE results day in India. I fired up another browser window in a state of panic, and lo! The page loaded:-

GSoC Dashboard

EPILOGUE

Well, I am in now, with the responsibilities of a GSoCer on my shoulders. This is the link to my GSoC project.

I just wish for the strength to push (pun intended) through all the proposed milestones and to be able to make an awesome release before Akademy 2019. ^-^

This post is a bit late to be a welcome-to-GSoC post, but it marks the day I did my first merge as a KDE Developer! I look forward to many more interactions with the KDE Community, now that I’ve Set Sail For Adventure!

Stay safe and make the internet a healthier place!

May 12, 2019 12:31 PM

Prashant Sharma (gutsytechster)

Mock Testing in Python

Heya! I’ve been busy with my exams for some time. Though that doesn’t imply that I didn’t learn anything new. I have learned something crucial, something that plays an important role if you are writing a good piece of code, and that something is known as testing. Testing in Python can be done in various ways, but this post will be dedicated to using mocks in Python.

Mock is a library which resides in the standard Python testing module unittest. So unless you are using an old Python version such as Python 2, it is already available for you to use. In case you are using an old Python version, you’d need to install it using pip as

pip install mock

And now you are ready to get along with this post.

What’s the need?

If you are already familiar with testing in Python, you might be aware of the unittest module, the standard module used for Python testing. Some of you might’ve used pytest as well. Mock is not another testing framework; rather, it is a library which provides utilities to solve a particular problem in testing. Let’s understand it with an example.

Suppose that your piece of code interacts with a third-party API. It deals with making requests and getting responses. When testing that piece of code, you’d need to make an actual request to the API. That would actually test your code very well. But as soon as your project starts to scale, or you need to run the tests every other minute, that leads to some problems such as:

  • Dealing with a third-party API is time-consuming. It takes a noticeable amount of time to complete a request-response cycle, especially when it involves uploading a file or image. Interacting with the API like that every time we run tests definitely isn’t a good approach.
  • Some APIs cost you money depending on the number of requests made. So if you are hitting the API every time you run the tests, your wallet will take a real hit.
  • There could be situations when the API service is down for maintenance work, and throughout that time you won’t be able to run the tests successfully.

Therefore to avoid such situations, we use mock objects.

What is Mock testing?

Now since you know the need of using mock objects, let’s understand what they are and how one can use them. Wikipedia defines a mock object very well as

In object-oriented programming, mock objects are simulated objects that mimic the behavior of real objects in controlled ways, most often as part of a software testing initiative. A programmer typically creates a mock object to test the behavior of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behavior of a human in vehicle impacts.

As said above, a mock object acts like a dummy object which can be used in place of the real object. But how do we get it to behave like the real object? Well, that’s simple! We set the expectations.

Ah! I hear you say “Talk is cheap, show me the code!”. Okay then, let’s consider the following code excerpt to understand it easily

# getMyProfile.py

import requests
from settings import access_token

def get_response():
    url = "https://api.linkedin.com/v2/me"
    response = requests.get(url, headers={'Authorization': f"Bearer {access_token}"})
    return response


def format_response():
    response = get_response()
    json_response = response.json()
    first_name = json_response.get('localizedFirstName')
    last_name = json_response.get('localizedLastName')
    return f"{first_name} {last_name}"


print(format_response())

The above code sends a GET request to a LinkedIn API endpoint and gets details about myself. It then prints my name, taken from the corresponding response.

Now we want to test these two functions. For that we use the unittest.mock library, which helps us create mock objects. Let’s go ahead and create a test file to test the functions present in the above file.

unittest.mock contains a large number of functions and properties which can be used to simulate real objects. Before testing the above code, let’s understand some basics of how the mock library works in Python.

>>> from unittest import mock
>>> mock_object = mock.Mock()
>>> mock_object
<Mock id='140023121454864'>

The above piece of code shows that we can create a mock object just by calling Mock(), and it’s done. It’s as simple as it seems.
There is a MagicMock() class as well, a subclass of Mock() with all the magic methods pre-created and ready to use. You know what magic methods are, don’t you? In case you don’t, these are the built-in methods that start and end with a double underscore in Python, such as __str__, __add__, __sub__, __init__ etc. They are all called when a certain specific operation is performed, e.g. __str__ returns its value when you try to print an object. Let’s apply the same on a MagicMock object:

>>> mock_object = mock.MagicMock()
>>> mock_object.__str__.return_value = '5'
>>> print(mock_object)
5

As you can see, we have configured the mock object by assigning a return value to it, and that is how we actually set expectations on a mock object, defining how it should behave.
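As a quick aside, return_value is not the only expectation we can set; side_effect lets a mock raise an exception or run a function instead. A small sketch (the attribute names here are made up):

from unittest import mock

api = mock.Mock()
api.get_user.return_value = {'name': 'Prashant'}   # calling api.get_user() returns this dict
api.save.side_effect = ValueError("boom")          # calling api.save() raises ValueError

assert api.get_user() == {'name': 'Prashant'}
# api.save()   # would raise ValueError("boom")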

patch() method

Now that we’ve understood the basics of mock objects, let’s head back to our code excerpt and write the tests for it. To mock classes or functions residing at some specific location, we use the patch() method, which resides in the unittest.mock library. The patch() method is helpful when we want to mock an object temporarily. It can be used as a decorator as well as a context manager.

patch() method takes the path to the function or class which is to be mocked.

Where to patch?

The basic principle to keep in mind whenever using patch is that we patch the object where it is used, which is not necessarily the same place where it is defined. We’ll see this in a few moments, so be alert when you see a patch() definition.

Case I: As a decorator

When used as a decorator, it passes a MagicMock object for the given function or class as an argument to the test function it is applied to.
Let’s first write the test for the get_response function. The function sends a GET request to the given URL with the given headers and returns the response. But we don’t want to send an actual GET request in the test. Hence we need to mock the requests.get() method for it.

import requests
from unittest import mock

from settings import access_token
from getMyProfile import get_response, format_response


@mock.patch('getMyProfile.requests.get')
def test_get_response(mock_get):
    url = "https://api.linkedin.com/v2/me"
    headers = {'Authorization': f"Bearer {access_token}"}
    mock_get.return_value = requests.Response()
    response = get_response()
    mock_get.assert_called_once_with(url, headers=headers)
    assert isinstance(response, requests.Response)

I have applied the patch decorator on the test_get_response method. Notice the path I have given to it. I have patched the requests.get method as it is used inside the getMyProfile module, rather than where it is defined. Hence, it will replace the actual requests.get within the getMyProfile file with mock_get, which is a mock object.

I’ll jot down the execution of the above code in the following points:

  • We set expectations by configuring the return_value attribute on the mock_get object, setting it to a requests.Response() instance. It implies that mock_get should return a Response object, which is what the actual requests.get() method returns.
  • We call the get_response function and collect its output in the response variable.
  • The unittest.mock library provides various assertion methods, one of which has been used in the above test. We’ve used the assert_called_once_with() method, which, as its name suggests, asserts that a mock object was called exactly once with the specified arguments.
  • The next assertion checks whether the response provided by the get_response function is a Response object.

Now let’s complete the test for the other function as well. You might be able to understand it more easily now

...

@mock.patch('getMyProfile.requests.Response.json')
@mock.patch('getMyProfile.get_response')
def test_format_response(mock_get_response, mock_json):
    mock_get_response.return_value = requests.Response()
    mock_json.return_value = {
        'localizedFirstName': 'Prashant',
        'localizedLastName': 'Sharma'
    }
    formatted_response = format_response()
    mock_get_response.assert_called_once()
    assert formatted_response == 'Prashant Sharma'

The format_response function takes the output from the get_response function and returns the full name.

Let’s jot down the functioning of the above test as well in a few points:

  • We don’t want to actually call the get_response function within the tests, as that would involve sending a request to the API. Therefore we create a mock object for that function and use it within our test. So we give the path to the get_response method to patch, which provides the mock object in the form of mock_get_response.
  • The format_response function also takes the JSON output from the response using the response.json() method. Since we are not getting an actual response, we need to simulate the json method as well. So here too we give patch the path to the json() method where it is used, which returns the mock object in the form of mock_json.
  • We then set the expectations on both the mock objects and call the actual format_response().
  • We make the assertion using one of the assertion methods provided by the unittest.mock library, i.e. assert_called_once(), and assert the return value of the function.

You might have noticed that the mock objects appear in the test function’s parameter list in the reverse order of the patch decorators, i.e. bottom-up: the decorator closest to the function maps to the first parameter.

And that’s it. This is how mocks work and simplify testing. The scope of the mock objects, when patch is used as a decorator, lasts for the function or class on which it is applied.

Case II: As a context manager

The usage of patch remains the same; only the way it is defined changes. patch is used as a context manager when we want to apply it within a block or context. If test_get_response were written using patch as a context manager, it would look like this:

def test_get_response():
    url = "https://api.linkedin.com/v2/me"
    headers = {'Authorization': f"Bearer {access_token}"}
    with mock.patch('getMyProfile.requests.get') as mock_get:
        mock_get.return_value = requests.Response()
        response = get_response()
        mock_get.assert_called_once_with(url, headers=headers)
    assert isinstance(response, requests.Response)

Here we created the context using the Python with statement. Now the mock_get object works within the with block. That’s why all the statements which need to use the mock_get object are defined within the block. As you can see, the functioning remains the same; what differs is the way it looks.

I like to use patch as a decorator, but that’s a personal choice. You may like to use it as a context manager. Sometimes it also depends on the requirement and the code.

Phew! This was quite a long post. Though I hope I was able to help you understand the basics of mock testing in Python in an easy way. But it’s just an introduction and a basic idea of how it works. There are a lot of methods and properties that await you to read about and use. I leave the rest to your curiosity.

References and further reading

  1. https://docs.python.org/3/library/unittest.mock.html
  2. Lisa Roach – Demystifying the Patch Function – PyCon 2018
  3. Ana Balica – To mock, or not to mock, that is the question – PyCon 2016
  4. Mocking External APIs in Python

Well then, see you next time. Till then be curious and keep learning!

by gutsytechster at May 12, 2019 07:24 AM

May 07, 2019

Piyush Aggarwal (brute4s99)

recovering arch from hell

Rebuilding an Arch

easier than it looks

PROBLEM

Not clear, but looks like misconfigured packages after multiple installations, uninstallations and re-installations of packages and Desktop Environments

PROLOGUE

So today I had problems that caused KDE Plasma to not acknowledge my laptop as a laptop. In other words, my Arch was on the edge of collapse.

BABY STEPS

So, I tried reinstalling all the packages of my installation in one command, like so

# pacman -Qenq | sudo pacman -S -

But as you can see, the post doesn’t end here; it didn’t pan out.

SOLUTION

After hours of help at #archlinux and #kde-plasma, I found this Forum page that gave me just the right instructions!

  1. First up, I removed all the orphaned/unused packages rotting away in my system.

    # pacman -Rns $(pacman -Qtdq)
  2. next, I force-reinstalled all the packages I had in my installation.

    # pacman -Qqen > pkglist.txt
    # pacman --force -S $(< pkglist.txt)

EPILOGUE

Now my installation is sweet as candy with no loss of any personal configs, and everything is perfect again!

😄 🎉

May 07, 2019 10:01 PM

ABCs of Unix

UNIX

A is for awk, which runs like a snail, and
B is for biff, which reads all your mail.
C is for cc, as hackers recall, while
D is for dd, the command that does all.
E is for emacs, which rebinds your keys, and
F is for fsck, which rebuilds your trees.
G is for grep, a clever detective, while
H is for halt, which may seem defective.
I is for indent, which rarely amuses, and
J is for join, which nobody uses.
K is for kill, which makes you the boss, while
L is for lex, which is missing from DOS.
M is for more, from which less was begot, and
N is for nice, which it really is not.
O is for od, which prints out things nice, while
P is for passwd, which reads in strings twice.
Q is for quota, a Berkeley-type fable, and
R is for ranlib, for sorting ar table.
S is for spell, which attempts to belittle, while
T is for true, which does very little.
U is for uniq, which is used after sort, and
V is for vi, which is hard to abort.
W is for whoami, which tells you your name, while
X is, well, X, of dubious fame.
Y is for yes, which makes an impression, and
Z is for zcat, which handles compression.

— THE ABCs OF UNIX

May 07, 2019 08:01 PM

contributing to pandas

pandas
pandas: powerful Python data analysis toolkit

for PyDelhi DevSprint 02/02/19

pre-DevSprint reading material:-

Homework

0. Remove existing pandas installation

    pip uninstall pandas

1. Fork me!

2. Clone the fork to your PC.

3. Install pandas from source.

  • cd into the clone and install the build dependencies.

    python -m pip install -r requirements-dev.txt
  • Build and install pandas. (takes ~20 minutes on an i5 6200U with 8GB RAM)

    python setup.py build_ext --inplace -j 4 
    python -m pip install -e .

Background

Work on pandas started at AQR (a quantitative hedge fund) in 2008 and has been under active development since then.

Chat with more pandas at Gitter.im!

Some Tips

Bad Trips

I accidentally rebased on origin/master. That was ~350 commits behind upstream/master!

Steps taken (a rough command-level sketch follows the list):-

  • reverted HEAD to just before rebase
  • merged upstream/master into origin/is_scalar
  • updated origin/master to get NO diffs in upstream/master and origin/master
  • ran git rebase origin/master and fixed a conflict in doc/source/whatsnew/v0.24.0.rst
  • pushed to origin/is_scalar.
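In commands, that recovery can look roughly like this (a hedged sketch; the reflog index and exact refs are illustrative):

    git checkout is_scalar
    git reflog                           # find the commit the branch pointed to before the rebase
    git reset --hard HEAD@{5}            # revert to just before the rebase (index is hypothetical)
    git fetch upstream
    git merge upstream/master            # merge upstream/master into is_scalar
    git checkout master
    git merge --ff-only upstream/master  # make master identical to upstream/master
    git push origin master
    git checkout is_scalar
    git rebase origin/master             # replay is_scalar, fixing conflicts as they come up
    git push origin is_scalar            # a rebase usually needs a force push (--force-with-lease)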

Stay safe and make the internet a healthier place!

May 07, 2019 08:01 PM

the git flow

INTRODUCTION

A common question for anyone stepping foot into the world of FOSS contributions is- How to start? This post aims to be the post I wish I had read a year ago when I started my journey.

BABY STEPS

The commonly known work flow for git is as follows:-

  1. Make a fork of the target project.
  2. Clone the fork to your local dev machine.
  3. Set a remote for upstream (steps 2 and 3 are sketched in commands right after this list).
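In commands, steps 2 and 3 look roughly like this (the URLs and names are placeholders):

      git clone https://github.com/<your-username>/<project>.git
      cd <project>
      git remote add upstream https://github.com/<original-org>/<project>.git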

Contact the project team, introduce yourself and ask questions related to the project. Read more about this here. This is super important!

  1. Test the project yourself.

  2. Look for any issues or bugs/ something to fix in the project.

  3. This step should be performed every time you are about to make a new branch, or want to update the master branch of your fork. Perform the following two commands to update the master branch of the fork:-

      git fetch upstream
      git merge upstream/master
      *resolve the conflicts, if any*

It’s always a good idea to make a different (feature) branch for every feature/ issue/ bug you work on. While this keeps all your diverse efforts in one single folder, it maintains them completely separate from each other. This way, you don’t have to worry about anything but just the branch name! Keep memorable and simple branch names.

  1. Make a new branch titled something relevant to the thing you wish to fix, say XYZ.

  2. Make the fix, push it to origin (i.e your fork) remote, as the XYZ feature branch.

  3. Make a Pull/Merge Request.

      1. Wait for review.
      2. Make necessary fixes.
      3. Repeat from step 8.1 while not approved by every reviewer.

Most probably, you will now be asked to rebase your branch. It just means to perform a couple of commands that will replay all your commits on the new, latest stuff from upstream/master.

  1. Perform the following two commands to rebase your XYZ feature branch:-

      git fetch upstream
      git rebase upstream/master
      *resolve the conflicts, if any*

The Stash

Many times you might want to start working on some other feature right away! In such cases usually you would have some uncommitted files in your current branch.

This usually happens in the face of release date deadlines. The git-stash can save your uncommitted changes on a stack-like data structure. This is done by the command:-

      git stash
      *your current working branch will be clean now, i.e there will be no uncommitted changes left*

Don’t worry! They are in the stash, safe and sound. You can save multiple sets of uncommitted changes in the stash by using git stash every now and then. To see the list of all such sets of uncommitted changes, use the following command:-

      git stash list

Just perform this command to get the most recently stashed changes back in your current working branch:-

      git stash apply
      *yes, you have the freedom to use `git stash` at one branch, and then checkout another branch and do `git stash apply`. It will work.

If you wish to retrieve any other set, refer to the index of stash, for example:-

stash@{0}: WIP on telephony_unknown: 7df58d0d SVN_SILENT made messages (.desktop file) - always resolve ours
stash@{1}: WIP on master: 7df58d0d SVN_SILENT made messages (.desktop file) - always resolve ours
stash@{2}: WIP on timestamp: 9ec0d04f SVN_SILENT made messages (.desktop file) - always resolve ours
(END)
      git stash apply stash@\{1\}
      *this will apply the set of changes in index 1 i.e stash@{1} to the current working branch

CONCLUSION

This should be enough to get you going with the adventures of git.

Git is focussed on freedom by design. You can do a lot of stuff, and you can also undo it as you go, so don’t fret to play with this empowering tool!

signing off now; later! :)

Stay safe and make the internet a healthier place!

May 07, 2019 08:01 PM

multi-booting

INTRODUCTION

I am an Arch Linux user by day, but recently I needed constant access to the Windows 10 OS to develop KDE Connect - an awesome project by some smart-working developers from across the globe - for Windows.

While working with the team, I also had to install Ubuntu to test a new release for the Ubuntu users.

All this boils down to a system that already contains Arch Linux, to house Windows 10 and Ubuntu along, on a 500GB hard disk.

I have also mentioned a rookie mistake in this blog post, so do take it with a pinch of salt.

CHALLENGE 1: One storage device, many partitions

The thing is, such a system requires many partitions, and the legacy MBR partitioning scheme allows just 4 primary partitions at most. Enter UEFI with GPT, which allows far more partitions (128 by default) on a single storage device.

STATUS: Arch Linux `OK` ; Ubuntu `TO_BE_INSTALLED` ; Windows `TO_BE_INSTALLED`

CHALLENGE 2: Getting Windows 10 media to boot in UEFI mode

For this, I used Rufus to create my installation media, and supplied the latest Windows ISO received from the Media Creation Tool provided by Microsoft.

Luckily, Windows installed itself nicely along with Arch Linux, and I was able to dual boot just fine after the installation, with GRUB2 from Arch Linux.

STATUS: Arch Linux `OK` ; Ubuntu `TO_BE_INSTALLED` ; Windows `OK`

TRIPLE BOOT TIME!

I went for Ubuntu 18.04 LTS here because it was the latest LTS edition.

I simply installed it on a separate ext4 partition at the end of my HDD (using the Something else option).

I’m not sure what happened here, but it might have done something to my prior GRUB config managed by Arch Linux.

On the next boot, the GRUB menu from Arch Linux showed up, which had options for Windows and Arch Linux (no Ubuntu here).

CHALLENGE 3: Get Ubuntu OS to boot

Then I went ahead and booted into Arch to run a grub-mkconfig -o /boot/grub/grub.cfg, because it didn’t know about the Ubuntu OS. After I rebooted the system, Ubuntu’s GRUB config greeted me, which did not have Arch Linux as a boot option.

I lost access to Arch Linux now. I was not happy, to say the least.

STATUS: Arch Linux `NOT_BOOTING` ; Ubuntu `OK` ; Windows `OK`

CHALLENGE 4: Get Arch Linux to boot

Next, I tried running the same grub-mkconfig -o /boot/grub/grub.cfg in Ubuntu OS.

I got options for Arch Linux then, but they didn’t work for me (poor Arch support in Ubuntu 18.04?).

Then I fired off an Arch Linux Live USB and decided to try to get GRUB reinstalled from my Arch Linux installation.

  • re-formatted the /dev/sda1 (EFI) partition.
  • arch-chrooted into my Arch installation and force-reinstalled all my Arch Linux packages as in my previous post (to get the Linux firmware images in /boot).

I could’ve done it by reinstalling just the firmware too, as <Namarggon> on #archlinux (IRC) suggested.

  • ran the grub-install and grub-mkconfig commands from my GitHub gist - ARCH COMMANDS (a typical set of such commands is sketched just after this list)
  • ran genfstab command from that GitHub gist.
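For reference, on a UEFI system those commands typically look something like this (only a sketch; it assumes the EFI partition is mounted at /boot/efi inside the chroot, and the exact invocations were the ones in the gist):

    grub-install --target=x86_64-efi --efi-directory=/boot/efi --bootloader-id=GRUB
    grub-mkconfig -o /boot/grub/grub.cfg
    genfstab -U /mnt >> /mnt/etc/fstab    # run from the live system, outside the chroot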

(kudos to <GreyShade> and <iovec> for helping me out on this one!)

I have access to my Arch Linux and Ubuntu now.

STATUS: Arch Linux `OK` ; Ubuntu `OK` ; Windows `NOT_BOOTING`

UPDATE:

It took a couple of commands, bootrec /fixmbr and bootrec /rebuildBCD, from a Windows OS installation media. They set up the new EFI files in the EFI partition, and I finally had access to all three systems! \o/

STATUS: Arch Linux `OK` ; Ubuntu `OK` ; Windows `OK`

CONCLUSION

I obviously should not have removed the EFI partition, since that step increased the work needed to set up the other OSes. If you happen to find any other weak links or a better procedure, please do share it with me over mail or Twitter!

Stay safe and make the internet a healthier place!

May 07, 2019 08:01 PM

Kuntal Majumder (hellozee)

Summer is coming...

Note: Not a Game of Thrones fan, yet to watch even a single episode, but I wanted a catchy title This will be a story, so get prepared to be bored, 😸.

by hellozee at disroot.org (hellozee) at May 07, 2019 08:48 AM

Summer is coming...

Note: Not a Game of Thrones fan, yet to watch even a single episode, but I wanted a catchy title This will be a story, so get prepared to be bored, 😸.

May 07, 2019 08:48 AM

May 01, 2019

Priyanka Sharma

Introduction to Python Programming

What is Python

Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Python was developed by Guido Van Rossum in the late eighties and early nineties at the National Research Institute for Mathematics and Computer Science in the Netherlands.

Features of Python:-

  • Fewer lines of code than other languages (fast execution of ideas)
  • Platform independent
  • Open Source
  • Object Oriented
  • It supports functional and structured programming methods
  • It can be used as a scripting language or can be compiled to byte-code for building large applications
  • It provides very high-level dynamic data types and supports dynamic type checking
  • It supports automatic garbage collection
  • It can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java

What big companies use Python?

  • Google (Youtube)
  • Facebook (Tornado)
  • Dropbox.
  • Yahoo.
  • NASA.
  • IBM.
  • Mozilla.
  • Quora

Is Google written in Python?

Google App Engine is an eminent example of a Python-written application; it allows building web applications with the Python programming language, using its rich collection of libraries, tools and frameworks. Python is everywhere at YouTube.
code.google.com – the main website for Google developers.

Is Python for web development?

Python can be used to build server-side web applications. While a web framework is not
required to build web apps, it’s rare that developers would not use existing open source
libraries to speed up their progress in getting their application working. Python is not
used in a web browser.

What is the framework for Python?

“What is a web framework?” is an in-depth explanation of what web frameworks are
and their relation to web servers. Django vs Flask vs Pyramid: Choosing a Python Web Framework contains background information and code comparisons for similar web applications built in these three big Python frameworks

Installing Python in Linux:

If you are using Ubuntu 16.10 or newer, then you can easily install Python 3.6 with the following commands:

$ sudo apt-get update
$ sudo apt-get install python3.6

To see which version of Python 3 you have installed, open a terminal and run

$ python3 --version

 


If you are using another Linux distribution, chances are you already have Python 3 pre-installed as well. If not, use your distribution’s package manager. For example on Fedora, you would use dnf:

$ sudo dnf install python3

For more: Refer to this link.

Installing Python in Windows:

How to start Python?
First of all, install Python 2.7.12 on your system. After installation it will create a directory in the C: drive named Python27. In Python27 there is a file python.exe. Copy the path of python.exe from the address bar.

To set the path of python in “path” system variable:

  1. Copy the path of python.exe from address bar.
  2. Right click on Computer then click on Properties
  3. Click on Advanced System Settings then click on Environment Variables.
  4. Choose path variable and click on Edit button.
  5. Deselect the highlighted path, append a semicolon at the end, and paste the copied path.
  6. Now click on OK button.

First Python Program:-

Open the command prompt, type python and press the Enter key. The Python prompt will open. Now type the following code:
code:
print "Hello Python!"
Press the Enter key and it will display the following output:-
output:
Hello Python!

Make simple calculator using python:-

At the command prompt, make a directory PythonProgs using the md command.
Then use the cd command to open the PythonProgs directory.
Now type notepad SimpleCalc.py. The Notepad editor will open; type the following code:

code:

a=input("Enter first number : ")
b=input("Enter second number : ")
print "Summation = ", (a+b)
print "Subtraction = ", (a-b)
print "Multiplication = ", (a*b)
print "Division = ", (a/b)

Now save the file SimpleCalc.py and close it.
Now at the command prompt type python SimpleCalc.py and it displays the following output:-
output:

Enter first number : 10
Enter second number : 5
Summation = 15
Subtraction = 5
Multiplication = 50
Division = 2

What is the return in Python?

The print() function writes, i.e., “prints”, a string in the console. The return statement causes your function to exit and hand back a value to its caller. The point of functions in general is to take in inputs and return something. The return statement is used when a function is ready to return a value to its caller.

What is the input function in Python?

Input can come in various ways, for example from a database, another computer, mouse clicks and movements or from the internet. Yet, in most cases the input stems from the keyboard. For this purpose, Python provides the function input(). Input has an optional parameter, which is the prompt string.
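A tiny illustration of return and input() together (written for Python 3, where input() always returns a string):

def add(number1, number2):
    return number1 + number2             # return hands the result back to the caller

name = input("Enter your name : ")       # input() shows the prompt and reads a line
print("Hello", name)
print(add(2, 3))                         # prints 5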

Do you need semicolons in Python?

Python does not require semicolons to terminate statements. You can also use them at the end of a line, which makes them look like a statement terminator, but this is legal only because blank statements are legal in Python: a line that contains a semicolon at the end is two statements, the second one blank.

Applications for Python

Web and Internet Development
Python offers many choices for web development:

  • Frameworks such as Django and Pyramid.
  • Micro-frameworks such as Flask and Bottle.
  • Advanced content management systems such as Plone and django CMS.

Python’s standard library supports many Internet protocols:

  • HTML and XML
  • JSON
  • E-mail processing.
  • Support for FTP, IMAP, and other Internet protocols.
  • Easy-to-use socket interface.

And the Package Index has yet more libraries:

  • Requests, a powerful HTTP client library.
  • BeautifulSoup, an HTML parser that can handle all sorts of oddball HTML.
  • Feedparser for parsing RSS/Atom feeds.
  • Paramiko, implementing the SSH2 protocol.
  • Twisted Python, a framework for asynchronous network programming.

Scientific and Numeric
Python is widely used in scientific and numeric computing:

  • SciPy is a collection of packages for mathematics, science, and engineering.
  • Pandas is a data analysis and modeling library.
  • IPython is a powerful interactive shell that features easy editing and recording of a work session, and supports visualizations and parallel computing.
  • The Software Carpentry Course teaches basic skills for scientific computing, running bootcamps and providing open-access teaching materials.

Education
Python is a superb language for teaching programming, both at the introductory level and in more advanced courses.

  • Books such as How to Think Like a Computer Scientist, Python Programming: An Introduction to Computer Science, and Practical Programming.
  • The Education Special Interest Group is a good place to discuss teaching issues.

by priyanka8121 at May 01, 2019 06:09 AM

April 11, 2019

Prashant Sharma (gutsytechster)

How to post on LinkedIn via its API using Python?

Hey folks! This time we are going to play with the LinkedIn API. Well, we all know what APIs are used for. Don’t we? Of course we do. Though in case you don’t, just go through this article once and you’ll understand what APIs are.
Coming back to today’s topic, i.e. using the LinkedIn API to post something on your LinkedIn profile via Python. I must say, LinkedIn has provided quite detailed and helpful docs for this, but sometimes we want examples to go along with the docs. I am going to write a Python script that posts on your LinkedIn profile using its API.

Getting Access Token

Before we start writing code, we first need to do some preparation, starting with getting an access token from LinkedIn, which allows us to use its API as an authenticated user. For that we need to go here and create an app. When you start filling up the form for creating the app, you might get confused as to what you should fill in the company field. Since we are using it for testing purposes, you may either create a company or select a random company from the available choices. The choices start showing up as soon as you start filling in that field. I selected a company named Test Company and guess what, it actually didn’t have any info. So I think the devs must have created that page for testing purposes 😛. That saved me.

As soon as the app is created, you can go to the My apps option available on your profile and find your newly created app there. Get into the app, click on the Auth option and look for the Permissions field. You will see that there are no permissions yet, and for sharing on LinkedIn via its API we need them. But don’t you worry, it just takes some time, about a day or so, for LinkedIn to review your app and grant you the permissions.

Once the permissions are granted, follow this guide to get the access token. If you get stuck anywhere in between, don’t hesitate to ask in the comments below.

Writing Python Script

Since you have acquired the access token, we can now use the API as an authenticated user. We’ll be using it to create valid authenticated requests. Now create a directory anywhere on your system and name it linkedin-post. Also make sure to create a virtual environment so that the project’s dependencies don’t interfere with your other projects.

After creating the virtual environment, install the requests module of Python using pip

pip install requests

It will install requests along with some of its dependencies. We’ll be using it to make GET or POST requests to the API. Now create a file post_on_linkedin.py and start writing the following

import requests
import os

Apart from the requests module, I have also imported the os module. We’ll see why in a few minutes. Keep reading for now. Let’s write a few more lines

import requests
import os

access_token = "<your access token here>"

We have assigned the access token generated earlier to a variable so that using it becomes easier. In case we need to change it or use it in different places, using a variable is much easier. But there is a problem here. Can you guess what that could be? The thing is that we often upload our code to a code hosting service like GitHub, Bitbucket, GitLab etc. So keeping such confidential credentials inside the code would be risky. It isn’t a good approach.

To resolve this, we use something known as environment variables. We define our private credentials in a file called .env in the form of key-value pairs and then use them as variables in our source file. But we make sure that we don’t push the .env file to the code hosting service. To read environment variables, Python has an awesome module called python-dotenv. Let’s go ahead and install it using pip as earlier

pip install python-dotenv

Now create a .env file in the same directory as the source file and write the following content to it.

ACCESS_TOKEN="<your access token here>"

Here the environment variable is ACCESS_TOKEN and its value is the actual token you assign to it. To use this environment variable, we’ll need to make a few changes to our source file

import requests
import os

from os.path import join, dirname
from dotenv import load_dotenv

# Create .env file path
dotenv_path = join(dirname(__file__), '.env')

# load file from the path
load_dotenv(dotenv_path)

# accessing environment variable
access_token = os.getenv('ACCESS_TOKEN')

We have imported a few more modules and used them to set the path of the .env file and to load it. Once the file is loaded, using an environment variable is as simple as calling os.getenv() with the key the value is assigned to. In our case, ACCESS_TOKEN is the one to be used here.

Now, let’s proceed further and add one more line

...
api_url_base = 'https://api.linkedin.com/v2/'

We have defined another variable with the API’s base URL, so that every LinkedIn API URL can be built from it. We can append to it as needed.

For sharing on LinkedIn, the request will always be a POST request to the API endpoint defined here, along with the post data. If you notice, the first parameter to be sent along with the request is author, and its value is the Person URN. To retrieve the Person URN, we send a GET request to the endpoint defined here. The id field in the response from this GET request is the Person URN. Since this id value is also a private credential, we’ll keep it in the .env file and use it through an environment variable.
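As a rough sketch, fetching that id once could look like this (not part of the final script; it reuses the access_token and api_url_base variables defined above):

me_url = f'{api_url_base}me'
me_response = requests.get(me_url, headers={'Authorization': f'Bearer {access_token}'})
print(me_response.json().get('id'))    # this value is your Person URN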

#.env file

ACCESS_TOKEN="<your access token here>"
URN="<your Person URN here>
# post_on_linkedin.py

...
access_token = os.getenv('ACCESS_TOKEN')
urn = os.getenv('URN')
author = f"urn:li:person:{urn}"

The URN is used to define the author parameter. We have used f-strings to substitute the value of urn in the author string. Apart from the post data that is to be sent along with the POST request, we also have to define headers.

...
headers = {'X-Restli-Protocol-Version': '2.0.0',
           'Content-Type': 'application/json',
           'Authorization': f'Bearer {access_token}'}

You may have noticed that we have used the access_token in the Authorization header and this is how the API authenticates us. We have to send these headers with every request when we want to share on LinkedIn.

Great work till now! Just a few more lines of code and we’ll be able to post on LinkedIn using a simple Python script. Let’s get ahead then.

Now we’ll define a function post_on_linkedin (you can name it anything you want) and write the following into it.

...
def post_on_linkedin():
    api_url = f'{api_url_base}ugcPosts'

    post_data = {
        "author": author,
        "lifecycleState": "PUBLISHED",
        "specificContent": {
            "com.linkedin.ugc.ShareContent": {
                "shareCommentary": {
                    "text": "This is an automated share by a python script"
                },
                "shareMediaCategory": "NONE"
            },
        },
        "visibility": {
            "com.linkedin.ugc.MemberNetworkVisibility": "CONNECTIONS"
        },
    }

    response = requests.post(api_url, headers=headers, json=post_data)

    if response.status_code == 201:
        print("Success")
        print(response.content)
    else:
        print(response.content)

Let’s understand this piece of code in a few points:

  1. We have defined the api_url. As I mentioned earlier, every request to share on LinkedIn has to be sent to the API endpoint defined here. So we appended ugcPosts to api_url_base to get the defined endpoint.
  2. We have defined the post_data that has to be sent with the request, in the form of a Python dictionary that closely resembles the JSON format, keeping every key and value as strings. All the necessary parameters are defined along with the values as documented here.
  3. We have sent the POST request to the api_url with the defined headers and post_data. The json parameter takes care of encoding post_data as JSON.
    We then check if response.status_code is 201, which signifies the successful execution of the request, and then we print out the response’s content.

That’s it! We have successfully written a Python script that can post on LinkedIn using its API. But you know what? The code still won’t work. Can you guess why? It’s simple, we haven’t called the function yet 😛. What are you waiting for? Just call the function outside its block.

...
post_on_linkedin()

Hurrah! Now go to your terminal and run this Python script. I am sure it will work. All the code of this tutorial is hosted here; you can check it for reference. It was fun working with the API and doing some amazing stuff. I hope it was helpful for you. If you find any mistake or want to give any suggestion, feel free to write in the comment section below.

References

  1. Share on LinkedIn

  2. https://www.digitalocean.com/community/tutorials/how-to-use-web-apis-in-python-3
  3. https://robinislam.me/blog/reading-environment-variables-in-python/

Meet you in the next post. Till then be curious and keep learning!

by gutsytechster at April 11, 2019 03:43 PM

April 10, 2019

Piyush Aggarwal (brute4s99)

the git flow

INTRODUCTION

A common question for anyone stepping foot into the world of FOSS contributions is: how to start? This post aims to be the post I wish I had read a year ago when I started my journey.

BABY STEPS

The commonly known workflow for git is as follows:-

  1. Make a fork of the target project.
  2. Clone the fork to your local dev machine.
  3. Set a remote for upstream.

Contact the project team, introduce yourself and ask questions related to the project. Read more about this here. This is super important!

  4. Test the project yourself.

  5. Look for any issues or bugs/something to fix in the project.

  6. This step should be performed every time you are about to make a new branch, or want to update the master branch of your fork. Perform the following two commands to update the master branch of the fork:-

      git fetch upstream
      git merge upstream/master
      *resolve the conflicts, if any*

It’s always a good idea to make a different (feature) branch for every feature/ issue/ bug you work on. While this keeps all your diverse efforts in one single folder, it maintains them completely separate from each other. This way, you don’t have to worry about anything but just the branch name! Keep memorable and simple branch names.

  7. Make a new branch titled something relevant to the thing you wish to fix, say XYZ.

  8. Make the fix, push it to the origin (i.e. your fork) remote, as the XYZ feature branch.

  9. Make a Pull/Merge Request.

      1. Wait for review.
      2. Make necessary fixes.
      3. Repeat from step 9.1 until the change is approved by every reviewer.

Most probably, you will now be asked to rebase your branch. It just means to perform a couple of commands that will replay all your commits on the new, latest stuff from upstream/master.

  10. Perform the following two commands to rebase your XYZ feature branch:-

      git fetch upstream
      git rebase upstream/master
      *resolve the conflicts, if any*

The Stash

Many times you might want to start working on some other feature right away! In such cases usually you would have some uncommitted files in your current branch.

This usually happens in the face of release date deadlines. git stash can save your uncommitted changes on a stack-like data structure. This is done by the command:-

      git stash
      *your current working branch will be clean now, i.e there will be no uncommitted changes left*

Don’t worry! They are in the stash, safe and sound. You can save multiple sets of uncommitted changes in the stash by using git stash every now and then. To see the list of all such sets of uncommitted changes, use the following command:-

      git stash list

Just perform this command to get the most recently stashed changes back in your current working branch:-

      git stash apply
      *yes, you have the freedom to use `git stash` on one branch, then checkout another branch and do `git stash apply`. It will work.*

If you wish to retrieve any other set, refer to the index of stash, for example:-

stash@{0}: WIP on telephony_unknown: 7df58d0d SVN_SILENT made messages (.desktop file) - always resolve ours
stash@{1}: WIP on master: 7df58d0d SVN_SILENT made messages (.desktop file) - always resolve ours
stash@{2}: WIP on timestamp: 9ec0d04f SVN_SILENT made messages (.desktop file) - always resolve ours
(END)
      git stash apply stash@\{1\}
      *this will apply the set of changes at index 1, i.e. stash@{1}, to the current working branch*

CONCLUSION

This should be enough to get you going with the adventures of git.

Git is focused on freedom by design. You can do a lot of stuff, and you can also undo it as you go, so don't fret to play with this empowering tool!

signing off now; later! :)

Stay safe and make the internet a healthier place!

April 10, 2019 07:31 PM

April 02, 2019

Bhavin Gandhi

Using Gadgetbridge and openScale with Amazfit Bip

Around 6 months ago, the wrist watch that I had been using for the last 11 years broke. It was not possible to get it repaired as the company does not manufacture any of its parts now. I was looking for an alternative but didn't like any of the other normal watches available. So I decided to buy the Amazfit Bip by Huami. Huami is a brand by Xiaomi. While I'm not really interested in the step count or sleep tracking, I liked the design of the watch.

by @_bhavin192 (Bhavin Gandhi) at April 02, 2019 02:23 PM

March 23, 2019

Bhavin Gandhi

infracloud.io: HA + Scalable Prometheus with Thanos

This is another blog post I wrote. It is about a tool called Thanos, which can be used to set up a highly available Prometheus. It was published at infracloud.io on 8th December, 2018. HA + Scalable Prometheus with Thanos

by @_bhavin192 (Bhavin Gandhi) at March 23, 2019 09:25 AM

March 22, 2019

Bhavin Gandhi

infracloud.io: Kubernetes Autoscaling with Custom Metrics

I wrote a blog post about scaling workloads in Kubernetes based on the metrics generated by applications. It was published at infracloud.io on 20th November, 2018. Kubernetes Autoscaling with Custom Metrics

by @_bhavin192 (Bhavin Gandhi) at March 22, 2019 06:18 PM

March 21, 2019

Prashant Sharma (gutsytechster)

YAML 101

Writing configuration files has become much easier since we came across YAML. YAML is a recursive acronym that stands for YAML Ain't Markup Language. But guess what? Initially it was said to mean Yet Another Markup Language, but it was then repurposed to be data oriented rather than a document markup. In short, YAML is a human-readable data serialization language. Though it can be used in many applications where data is stored or transmitted, it is commonly used for writing configuration files. Many software tools like Travis CI and Docker use YAML to define their configurations.

It is also said to be a superset of JSON syntax, i.e. every JSON document is a valid YAML document as well. Apart from that, it also contains some features that JSON lacks, which we'll be seeing in a few minutes. That's what makes it so awesome.

YAML uses .yaml as its official extension, though many documents also use .yml. For a short answer as to why these two extensions exist, please refer here. Well then, let's start understanding its basics and write some configuration files ourselves.

Structure

A YAML file mainly consists of map objects, just like dictionaries in Python or hashes in other languages, i.e. key-value pairs generally defined as follows:

---
key: value
...

A key is followed by a colon and a space, then a value associated with it. Apart from the key-value pair, I've used dashes above its definition. Three dashes represent the start of the YAML file or, to be more specific, they separate directives from content. Also, there are a few dots below the key-value pair. Three dots represent the end of the YAML file.

Keys/Values

A key in YAML can be of different types like a string, a scalar value, a floating point number etc. Also, strings don't need to be quoted using single or double quotation marks. However, they may be, for the purpose of escaping some characters.

The same goes for values. They can also be of any data type. Apart from the types defined for keys, a value can be a boolean or null as well. E.g.

---
'key with quotation marks': 'value in quotation marks'
23: "An integer key"
'a boolean value': true
key with spaces: 3
a null value: null

All of the above examples are valid map objects as per YAML syntax.
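To see how such map objects look on the Python side, here is a small sketch; it assumes the PyYAML package, which is not mentioned in this post.

# A small sketch: parsing a YAML mapping into a Python dict with PyYAML.
import yaml

document = """
---
'key with quotation marks': 'value in quotation marks'
23: "An integer key"
'a boolean value': true
key with spaces: 3
a null value: null
"""

data = yaml.safe_load(document)
print(data)
# {'key with quotation marks': 'value in quotation marks', 23: 'An integer key',
#  'a boolean value': True, 'key with spaces': 3, 'a null value': None}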

Nesting

Nesting in YAML can be implemented using indentation. Indentation in YAML is given by two or more spaces at the beginning of a line. YAML is very strict with its indentation. For e.g.

---
a_nested_object:
  key: value
  another_key: another_value

One more thing, YAML uses only spaces and not tabs.

Sequence

A sequence or list can also be defined in YAML using `-`(dash) as a list marker. For e.g.

---
- item1
- item2
- nested_item:
    - nested_item1
    - nested_item2

Note the space after each list marker.

Multi-line Strings

Multi-line strings in YAML can be written either as a 'literal block' or a 'folded block'. The difference between the two is that a 'literal block' preserves newlines while a 'folded block' folds them. A literal block uses the pipe character (|) whereas a folded block uses the '>' symbol. Consider an example.

data: |
   There once was a tall man from Ealing
   Who got on a bus to Darjeeling
       It said on the door
       "Please don't sit on the floor"
   So he carefully sat on the ceiling
data: >
   Wrapped text
   will be folded
   into a single
   paragraph

   Blank lines denote
   paragraph breaks

A folded block converts newlines to spaces and removes leading whitespace.

Inline Mapping

YAML, being a superset of JSON, allows inline key-value pairs enclosed in curly braces. For e.g.

name: Prashant Sharma
age: 18

can be written as

# Inline format
{name: Prashant Sharma, age: 18}

Though unlike JSON, keys or values don't necessarily need to be quoted. You might have noticed that I have used a comment in the above example. A comment can be written by prefixing it with a '#'. The same fashion can also be seen for sequences. For e.g.

[milk, pumpkin pie, eggs, juice]

is a valid sequence in YAML. And you know what? Sequences can also be used as keys or values in YAML syntax.

- {name: Prashant, age: 18}
- name: Shiva
  age: 20
- [name, age]: [Neeraj, 14]

Complex Keys

Just as with multi-line values above, keys can also be complicated in some cases, i.e. they can span multiple lines or be an indented sequence. To denote a complex key, we use a '?' followed by a space. For e.g.

? |
  This is a key
  that has multiple lines
: and this is its value
? - Prashant Sharma
  - Shiva Saxena
: [1998-11-16, 1997-10-07]

These were some amazing features. Weren’t they? But wait, it has got more in its pocket. Didn’t I tell you earlier, it’s so awesome. Now then let’s explore it a bit more.

Extra Features

  • Anchors

Anchors in YAML allow us to easily duplicate content across our document and then use it anywhere throughout the document using references. An anchor is defined by prefixing the anchor name with an ampersand (&) and can be referenced by using an asterisk (*) along with the anchor name. For e.g.

anchored_content: &anchor_name This string will appear as the value of two keys.
other_anchor: *anchor_name
  • Merge

The merge key (<<) in YAML works along with anchors so that objects can be inherited. Consider an example for this

- step: &id001                  # defines anchor label &id001
    instrument:      Lasik 2000
    pulseEnergy:     5.4
    pulseDuration:   12
    repetition:      1000
    spotSize:        1mm
- step:
    <<: *id001
    spotSize: 2mm                # redefines just this key, refers rest from &id001

As you can see, we have merged the content by referencing the anchor and then redefining just one key-value pair.

  • Data Typing

We seldom see explicit data typing in YAML files as YAML itself is capable of detecting simple types like integer, string etc. Data types can be explicitly changed by using “!!” symbol followed by the data type name. In YAML, data types can be categorized as core, defined and user-defined.

  • Core data types are those which are usually implemented by all parsers (e.g. integer, string etc.).
  • There are some advanced data types which have been defined in the YAML specification but are not implemented in every parser, such as binary data; these come under the defined category.
  • Apart from that, YAML also allows us to define user-defined classes, structures or types.

We’ll take a look at all of them with a few examples:

---
a: 540                     # an integer
b: "540"                   # a string, disambiguated by quotes
c: 540.0                   # a float
d: !!float 123             # also a float via explicit data type prefixed by (!!)
e: !!str 123               # a string, disambiguated by explicit type
f: !!str true              # a string via explicit type

picture: !!binary |        # a binary data type
  R0lGODdhDQAIAIAAAAAAANn
  Z2SwAAAAADQAIAAACF4SDGQ
  ar3xxbJ9p0qa7R0YxwzaFME
  1IAADs=

myObject: !myClass { name: Prashant, age: 18 }

YAML also has a set data type. A set data type is nothing but a map object with null values. You could say that it’s a collection of keys only. They can be defined as

a_set:
  ? key1
  ? key2
  ? key3

or: {key1, key2, key3}

References

  1. https://learnxinyminutes.com/docs/yaml/
  2. https://en.wikipedia.org/wiki/YAML
  3. https://yaml.org/

Finally, we have reached the conclusion of this blog post. This was quite a long post, but I guarantee you won't need to go elsewhere again if you are working with YAML. Though if you find any mistake or have a suggestion, do tell me through the comment section below. I'll be glad to hear them. Well then, I bid you a happy goodbye. Meet you next time.

Till then, be curious and keep learning!

by gutsytechster at March 21, 2019 03:32 PM

Shiva Saxena (shiva)

Hardwork v/s Smartwork

Hi all! When we explain someone about productivity, these 2 words hard-work and smart-work inevitably come into consideration. Did you ever think about it? I mean, what “smart-work” really is? How is it different from “hard work”? Keep the answer in your mind and keep reading.

This complete post is based on a conversation I had with CuriousLearner. I am thankful to him for explaining the real meaning of smart-work.

This post is going to be all about questioning, and it is of a thought-provoking kind.

Nowadays, if we ask someone what they prefer between hard-work and smart-work, the most probable answer we'll get is smart-work. Then we may ask them in return: what do you think smart-work really is? Or rather ask that person directly: what is smart-work?

What is smart-work?

Going over the web, we may find out different people giving a different definition of this term. For example:

  • Smart work is that you do any work with lesser efforts

  • Accurately, performed work within a short period of time that’s called smart work.

So, do you have your definition with you? Let's see how well you really know this term.

You must know about Venn diagrams. Suppose hard-work is a circle (first entity) and smart-work is another one (second entity). What do you think? Are hard-work and smart-work different?

Think about it for a minute or two and keep the answer with you.

Case 1: Yes they are different

If your answer is yes, then you are saying that these 2 entities are different and the intersection of their Venn diagram is empty. That implies:

  • hard-work  ∩  smart-work  ==  empty

But is it really? Don’t you think there is some similarity among both of them?

Case 2: Somewhere they are the same

If this is your answer, then you are saying that the intersection of hard-work and smart-work is not empty. That implies:

  • hard-work  –  smart-work   ==  something
  • smart-work  –  hard-work  ==  something

If you think this is correct, then please tell me what is that <something>? 🙂

Think about it.

Are you able to find out that <something>? If yes, then you are smart; please feel free to comment your solution in the comments section below. I would love to read it. But if no, then why did you choose the answer that somewhere they are the same? 🙂

Anyhow, what about the least responsive answer as follows.

Case 3: Both completely overlap each other

Now this is really not a good answer, because this implies that:

  • hard-work  ==  smart-work

Which is surely not possible because, no matter what, they are at least not exactly the same. So, what are they?

Case 4: One is a subset of other

Really? If yes, then please think about which is a subset of which:

  • hard-work is a subset of smart-work? or
  • smart-work is a subset of hard-work?

Think.

By now, you need to ponder upon one more important thing. Before reading this post, did you really know what smart-work is? If no, then it was like you were trying to practice something in your daily life that you did not even understand.

And if someone doesn't know what smart-work is, then how can s/he claim to be practicing it? Isn't it?

Let’s get to the answer now.

Case 5: Smart-work is a subset of Hard-work

They are not different, yet they are not the same either. Smart-work is actually a subset of hard-work. That implies:

  • hard-work  –  smart-work  ==  something
  • smart-work  –  hard-work  ==  empty

Here, this <something> is all we have known as smart-work. So, what actually is this <something>?

The Smart-work

Smart-work is that part of hard-work in which we plan the process to execute in order to accomplish the goal. As simple as that. A better plan, better execution, thus, better results!

Smart-work was never meant to reduce the effort needed to accomplish a goal; it was always meant to get more efficient and better results with the same effort. Because effort should not be compromised. 🙂

Smart-work is not a shortcut to accomplish the goal, rather it is the way to get accomplishment to be more fruitful.

It is like a vector that gives direction to a quantity. The best example is pushing a brick.

  • 1D: Pushing a brick against a wall won’t give any result even though you are making efforts. But with the same effort if you push the same brick in the opposite direction, then it will move and you’ll get some work done.
  • 2D: Now the situation becomes more complex; you have 2 axes of motion, and you need to plan in which direction your goal lies so that you may push the brick along the correct path.
  • 3D: As we keep adding conditions to a task, out of the many needed to achieve the goal, we find that we need to think more and more to get the correct plan.

In real life, to achieve a goal there are numerous possibilities; you need to analyze, research, take feedback, then repeat. Once you are done and your plan is ready to go, your smart-work is already done. Now, what is left is hard work and effort.

Persons A and B did the same job, but A got better results than B. Who do you think was the smart-worker? Yes, A. Because s/he planned his/her tasks well and thereby became a smart-worker.

Conclusion

If, before reading the post, you didn't know what smart-work is, then perhaps you had an assumption about the subject. But being clear is better than remaining in an assumption. Make sure that you know about the thing that you claim to be doing. 🙂

Thanks for reading!

See you in the next post!


by Shiva Saxena at March 21, 2019 12:42 PM

March 17, 2019

Kuntal Majumder (hellozee)

Getting Alight

Would start this one with a quote: Information is free - you have to know. People are not - you have to pay. Contributors are priceless - you have to be.

by hellozee at disroot.org (hellozee) at March 17, 2019 07:31 PM

Bhavin Gandhi

Creating presentations with Org mode

As I said in the last blog post about Emacs, I use Org mode a lot. It’s the default mode I use for taking notes, tracking tasks etc. The export interface it provides is a really useful feature. There are a lot of packages, which provide ways to export the Org mode file to other formats like Markdown in addition to default supported formats HTML, PDF, text etc. Presenting within Emacs A few months ago I had a talk on Autoscaling in Kubernetes within our company.

by @_bhavin192 (Bhavin Gandhi) at March 17, 2019 02:09 PM

March 16, 2019

Prashant Sharma (gutsytechster)

How to install Docker on Linux Mint Debian Edition(LMDE)

There is a very probable chance that you will come across Docker at some point in your tech lifetime. This time it was me learning it. But before anything, you'd need to install Docker on your machine. And this is the part where I got stuck. So I thought writing a small post for other people who might face the same thing would help.

Like every other person, I had gone to the official documentation looking for the installation procedure. However, there were different procedures for installing it based upon your Linux distribution, and I found no specific option for Linux Mint. There were generic options like CentOS, Debian, Ubuntu, and Fedora. So I wasn't sure if I should go with the installation procedure for a Debian-based distro or if there was something else that had to be followed.

After getting help from various people and searching on the web, I successfully installed Docker on my LMDE machine. So let's go through the steps.

  1. Firstly check if the package docker.io is installed on your system. You can check it using the command
    $ aptitude search docker.io

    If it shows the package then a regular installation of this package would install docker on your machine. You can proceed as

    $ sudo apt-get install docker.io
  2. If the package docker.io is not present on your system, then we'll go with the procedure defined for Debian, with a few changes because we are using LMDE and not Debian itself.

    Since we are installing Docker for the first time, we need to set up the Docker repository so that we can install and update it from there. For that, run the following commands.

    Update the apt package index

    sudo apt-get update

    Install packages to allow apt to use repository over HTTPS

    $ sudo apt-get install \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common

    Now add Docker’s official GPG key

    $ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -

    Now, we will set up the repository for Debian stable release. To check on which Debian base your LMDE is set up, use the following command

    $ cat /etc/os-release

    It would produce the output something similar to

    PRETTY_NAME="LMDE 3 (cindy)"
    NAME="LMDE"
    VERSION_ID="3"
    VERSION="3 (cindy)"
    ID=linuxmint
    ID_LIKE=debian
    HOME_URL="https://www.linuxmint.com/"
    SUPPORT_URL="https://forums.linuxmint.com/"
    BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
    PRIVACY_POLICY_URL="https://www.linuxmint.com/"
    VERSION_CODENAME=cindy
    DEBIAN_CODENAME=stretch

    Here, what we have to look for is at the end of the generated output, i.e. DEBIAN_CODENAME, which is stretch in our case. Hence we'll be using the following command to set up the repository

    sudo add-apt-repository \
       "deb [arch=amd64] https://download.docker.com/linux/debian \
       stretch \
       stable"

    As soon as you press enter, the repository will be added to your system. This is the part where I have made a change from the official documentation. The official documentation takes the Debian release name from the command

    $ lsb_release -cs

    directly. However, that would give the codename of our LMDE release, i.e. cindy in my case. So to avoid this, we gave the Debian release name manually.

    Now that we have added the repository, let's update our apt package index once more.

    $ sudo apt-get update

    Great! Let’s go ahead with Docker installation using

    $ sudo apt-get install docker-ce docker-ce-cli containerd.io

    Hurrah! we have successfully installed Docker on our LMDE machine and now can use it.

You can check the Docker version you have got on your system, thereby also confirming that Docker has been installed.

$ docker --version

Docker version 18.09.3, build 774a1f4

Now let’s run another command

$ docker info

Error!

What? Did it give you an error? Something like this

docker: Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v1.35/containers/create: dial unix /var/run/docker.sock: connect: permission denied.

I thought so! But don't you worry, it's just a permission issue and we can deal with it. We need to add the user to the docker group. We can do it using the following command

sudo usermod -a -G docker $USER

Then check if it works. If not, restart the system and try again. Guess what? It will now show the info about the Docker installation on your system.

That’s it. We have successfully installed Docker and it’s ready to use. Go ahead!

References

  1. https://docs.docker.com/install/linux/docker-ce/debian/
  2. https://docs.docker.com/get-started/

Now then, bidding you goodbye. Meet you next time.

Till then, be curious and keep learning!

by gutsytechster at March 16, 2019 03:21 PM

March 12, 2019

Prashant Sharma (gutsytechster)

Chasing JSON-LD – Part II

JavaScript Object Notation for Linked Data popularly known as JSON-LD is a lightweight syntax to inject Linked Data into JSON so that it can be widely used in web applications and can be parsed by JSON storage engines.

This post is a continuation of the previous post, which describes the basics of JSON-LD, and will cover more of its features and concepts. I recommend you go through that post first for easier understanding. Well then, let's get started.

JSON-LD contains a variety of features that are really helpful for someone working with it. Some of them are described below.

  • Versioning

Since JSON-LD has two major versions, 1.0 and 1.1, you can define which version should be used for processing your JSON-LD document as per your use case. This can be done by defining the @version key in your @context. For e.g.

{
  "@context": {
    "@version": 1.1,
    ...
  },
  ...
}

The first context which defines the @version tells which version should be used for processing your JSON-LD document, unless it is defined explicitly.

  • Default Vocabulary

Very often, many properties and types come from the same vocabulary, e.g. schema.org is a widely used vocabulary for defining the semantics of various terms. JSON-LD's @vocab keyword provides the feature of setting a common prefix for all the properties and types that do not resolve to any IRIs. For e.g.

{
    "@context": {
      "@vocab": "http://schema.org/"
    },
    "@id": "http://example.org/places#BrewEats",
    "@type": "Restaurant",
    "name": "Brew Eats"   
}

The words Restaurant and name don't resolve to any IRI, hence they use @vocab's IRI as a prefix. However, there may arise a case in which you don't want a term to expand using @vocab's IRI. For that, the term should be set to null explicitly. For e.g.

{
    "@context": {
       "@vocab": "http://schema.org/",
       "databaseId": null
    },
    "@id": "http://example.org/places#BrewEats",
    "@type": "Restaurant",
    "name": "Brew Eats",
    "databaseId": "23987520"
}

Here, the key databaseId would not resolve to any IRI.

  • Aliasing Keywords

JSON-LD provides a way to give aliases to JSON-LD keywords except for the @context. This feature allows the legacy JSON code to be utilized by JSON-LD by re-using the JSON keys that already exist in the code. But a keyword can’t be aliased to another keyword. Consider an example for this

{
  "@context": {
    "id": "@id",
    "type": "@type",
  },
  "id": "http://example.com/about#gutsytechster",
  "type": "http://xmlns.com/foaf/0.1/Person",
}

Here, the @id and @type keywords have been aliased to id and type respectively and are used accordingly.

  • Internationalization

Sometimes we need to annotate a piece of text with a certain language. JSON-LD provides the @language keyword for this feature. For a global language setting, the @language keyword can be defined under @context. For e.g.

{
  "@context": {    
     "@language": "ja"
  },
  "name": "花澄",
  "occupation": "科学者"
}

You can also override default values using the expanded term definition as

{
  "@context": {    
     "@language": "ja"
  },
  "name": "花澄",
  "occupation": {
    "@value": "Scientist",
    "@language": "en"
  }
}

I liked this feature the most. It’s just amazing. 🙂

  • Embedding and Referencing

JSON-LD provides a way to use a node object as a property value. What? You ask me what a node object is. Well, a node object is a piece of information that can be uniquely identified within a document and lies outside the JSON-LD context. Let’s consider an example to understand this

[{
    "@context": {
      "@vocab": "http://schema.org/",
      "knows": {"@type": "@id"}
    },
    "name": "Shiva Saxena",
    "@type": "Person",
    "knows": "http://foaf.me/gutsytechster#me"
  }, 
  {
    "@id": "http://foaf.me/gutsytechster#me",
    "@type": "Person",
    "name": "Prashant Sharma"
  }
]

Here two node objects are defined, one for Shiva Saxena and the other for Prashant Sharma. The two are separated by a comma, each having properties of its own. The node objects are linked through referencing using the knows property; the knows property refers to the identifier of the other node object, i.e. Prashant in this case.

Two node objects can also be linked through embedding by using the node objects as property values. It is commonly used to create the parent-child relationship between two nodes. For eg.

{
  "@context": {
    "@vocab": "http://schema.org/"
  },
  "name": "Shiva Saxena",
  "knows": {
    "@id": "http://foaf.me/gutsytechster#me",
    "@type": "Person",
    "name": "Prashant Sharma"
  }
}

Here note that type-coercion for knows property is not required as its value is not a string.

  • Expansion

Expansion, in terms of JSON-LD, is the process of taking a JSON-LD document and converting it into a document in which no @context is required, by expanding all the IRIs, types and values defined in the @context itself. For e.g.

{
   "@context": {
      "name": "http://xmlns.com/foaf/0.1/name",
      "homepage": {
        "@id": "http://xmlns.com/foaf/0.1/homepage",
        "@type": "@id"
      }
   },
   "name": "Prashant Sharma",
   "homepage": "https://gutsytechster.wordpress.com/"
}

After expanding, it looks something like this

[
  {
    "http://xmlns.com/foaf/0.1/homepage": [
      {
        "@id": "https://gutsytechster.wordpress.com/"
      }
    ],
    "http://xmlns.com/foaf/0.1/name": [
      {
        "@value": "Prashant Sharma"
      }
    ]
  }
]

And I actually didn’t write that expanded form myself. There is a JSON-LD playground here where you can actually check if it is wrong or right!

  • Compaction

Now, can you guess what compaction might be? Well, it's just the opposite of expansion. It is the process of applying a context to an already expanded JSON-LD document, which results in shortened IRIs, terms or compact IRIs. Take the same expanded document from above and apply the following context to it.

{
  "@context": {
    "name": "http://xmlns.com/foaf/0.1/name",
    "homepage": {
      "@id": "http://xmlns.com/foaf/0.1/homepage",
      "@type": "@id"
    }
  }
}

We will get our original JSON-LD document back in the same form. I'll ask you to try it yourself in the JSON-LD playground.

But are you wondering what the need for these expansion and compaction algorithms is? The answer is pretty simple. A machine works with IRIs. So a processor expands the JSON-LD document for itself so that it can process the document, and then compacts it to return it to developers in the same format as it was given.
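If you'd like to run these algorithms from code rather than the playground, here is a rough sketch. It assumes the pyld package (a Python JSON-LD processor), which is not mentioned in this post, and reuses the same document as above.

# A rough sketch: expanding and compacting a JSON-LD document with pyld.
from pyld import jsonld

doc = {
    "@context": {
        "name": "http://xmlns.com/foaf/0.1/name",
        "homepage": {
            "@id": "http://xmlns.com/foaf/0.1/homepage",
            "@type": "@id"
        }
    },
    "name": "Prashant Sharma",
    "homepage": "https://gutsytechster.wordpress.com/"
}

expanded = jsonld.expand(doc)                            # full IRIs, no @context needed
compacted = jsonld.compact(expanded, doc["@context"])    # back to the short form
print(expanded)
print(compacted)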

I guess we have explored quite a bit about JSON-LD. But this still doesn't cover in-depth use cases for each of these features, and there are many other features available. I leave the rest to your curiosity.

References and Further Reading

  1. https://json-ld.org/spec/latest/json-ld/
  2. https://blog.codeship.com/json-ld-building-meaningful-data-apis/
  3. JSON-LD: Compaction and Expansion

  4. JSON-LD: Core Markup

Well then, meet you in the next blog post. Till then,

be curious and keep learning!

by gutsytechster at March 12, 2019 12:42 PM

March 10, 2019

Prashant Sharma (gutsytechster)

Chasing JSON-LD – Part I

Well, you might already be aware of this term. But if in any case it's a NO, then you are at the right place, my friend. I just started out with it and am already amazed by its working and concept. Let's start without waiting any more.

JSON-LD

Starting with its expanded form, it stands for JavaScript Object Notation for Linked Data. Many of you might already be familiar with JSON, and that's something simple. It's the most often used data format across the web for exchanging data. It represents data in the form of key-value pairs that are both human-readable and parsable by a machine. But how is it relevant at all? Well, because that's the main foundation and inspiration behind the emergence of JSON-LD.
But what's this Linked Data all about? For a more detailed description you might want to refer to this post. However, if I put it in simple words, then Linked Data is data that is linked across the web through a semantic meaning. It allows an application to start at one piece of Linked Data and go through other pieces of it hosted on various different sites on the web. And that's where JSON-LD enters.

JSON-LD is a lightweight syntax to express Linked Data in JSON format. Its primary objective is to use Linked Data across web-based services and to store it in JSON-based storage engines.

In other words, it injects meaning into already available JSON data. But why is it needed? To understand this, let's take an example of a simple JSON document

{
  "name": "Prashant Sharma",
  "homepage": "https://gutsytechster.wordpress.com/",
  "image": "https://gutsytechster.wordpress.com/images/gutsy.png"
}

It's a simple example representing a few keys and values which are self-explanatory. But a machine can't understand it. It doesn't know what name is. Either it has to look up some documentation where the meaning is defined, or we have to inject the meaning manually into the code processing this JSON. But just think how much better it could be if the meaning were already present for each term in the document itself. Well, this is possible with the help of Internationalized Resource Identifiers, IRIs (an extended version of URIs). We can use the popular schema.org vocabulary to define these terms. So in JSON-LD format, it can be translated as

{
  "http://schema.org/name": "Prashant Sharma",
  "http://schema.org/url": { "@id": "https://gutsytechster.wordpress.com/" },
  "http://schema.org/image": { "@id": "https://gutsytechster.wordpress.com/images/gutsy.png" }
}

For now, don't focus on the @id part. Just see how we have defined each key in terms of IRIs. However, even though it's a valid JSON-LD document that is very specific about its data, it's too verbose and would be difficult for a developer to work with. What we want is to be specific as well as concise at the same time. To address this issue, JSON-LD describes the notion of @context.

  • Context

During communication with one another, the whole conversation takes place in a shared setting, generally called the "context of the conversation". A context allows us to use short forms without losing their actual meaning. @context in JSON-LD works the same way. It allows us to map terms to IRIs so that they can be used throughout the document without losing their actual meaning. For e.g.

{
  "@context": {
    "name": "http://schema.org/name",
    "image": {
      "@id": "http://schema.org/image", 
      "@type": "@id"
    },
    "homepage": {
      "@id": "http://schema.org/url", 
      "@type": "@id" 
    }
  },
  "name": "Prashant Sharma",
  "homepage": "https://gutsytechster.wordpress.com/",
  "image": "https://gutsytechster.wordpress.com/images/gutsy.png"
}

In the above example, we defined the IRI for each term in the @context and then used the terms directly throughout the JSON document. The referencing of the image and homepage keys will be clear to you in a few minutes. Just keep reading 🙂

  • Global Identifiers

Identifiers help to uniquely identify a piece of information within a document. JSON-LD uses @id to identify such information. Its value is an IRI that can be dereferenced. For e.g.

{
  "@context": {
    ...
    "name": "http://schema.org/name"
  },
  "@id": "http://me.markus-lanthaler.com/",
  "name": "Markus Lanthaler",
  ...
}

In terms of Linked Data, we call such a piece of information a node. A node can be represented in a linked data graph. The above example contains a node object identified by the IRI http://me.markus-lanthaler.com/. A node object is simply a JSON object when it exists outside of a JSON-LD context.

  • IRI

IRIs are the fundamental part of Linked Data as that is how a property or a node is identified. An IRI can be an absolute IRI, a relative IRI or a compact IRI.

  1. An absolute IRI can be dereferenced and looked upon the web.
  2. A relative IRI is used in relation with a @base value which defines the root of IRI.
  3. A compact IRI is something a shorthand form of writing an IRI, it’s defined in prefix:suffix form where the prefix is the root of IRI and suffix is something that is to be added in the end. For eg.
{
  "@context": {   
     "schema": "http://schema.org/"
  },
  "@id": "http://me.markus-lanthaler.com/",
  "schema:name": "Markus Lanthaler",
}

In the above example, schema:name expands to the IRI http://schema.org/name.

In JSON-LD, a string is interpreted as an IRI when it is the value of an @id member, i.e.

{
  ...
  "homepage": { "@id": "http://example.com/" }
  ...
}

Here the string value http://example.com/ will be treated as an IRI as it’s a value of an @id member.

  • Type Coercion

JSON-LD supports the coercion of values to a particular data type. Type coercion is specified using the @type key in a key-value pair. For e.g.

  "@context": {
    "modified": {
      "@id": "http://purl.org/dc/terms/modified",
      "@type": "http://www.w3.org/2001/XMLSchema#dateTime"
    }
  },  
  "@id": "http://example.com/docs/1",
  "modified": "2010-05-29T14:17:39+02:00",
}

As we can see in the above example, we defined the modified key by giving it an @id, which identifies it uniquely, and a @type, which tells that it is a dateTime value. The value of the modified key is type-coerced automatically as it is defined in the @context. We can also set the type in the JSON body itself as

{
  "@context": {
    "modified": {
      "@id": "http://purl.org/dc/terms/modified"
    }
  },  
  "modified":
    "@value": "2010-05-29T14:17:39+02:00",
    "@type": "http://www.w3.org/2001/XMLSchema#dateTime"
  }  
}

We used the @value key to define its value and then set its type to dateTime. The way the key modified is defined in the above example is also known as an expanded term definition.
And that's how we defined the IRIs in the context section above, where we defined the @type of a key to be @id. Now it should be clear to you.

Well, I guess that should be enough for this time. But let me tell you, this is just a basic intro to what JSON-LD looks like, or rather I should say it's just the tip of the iceberg. There is a lot more to it. I've covered some more of its features in part II, Chasing JSON-LD – Part II. Give it a read as well.

References

  1. https://json-ld.org/
  2. https://json-ld.org/spec/latest/json-ld/
  3. What is JSON-LD?

  4. JSON-LD: Core Markup

Apart from the above references, I'd ask you to read the document JSON-LD and Why I Hate the Semantic Web, written by one of the primary creators of JSON-LD. It describes the things that were involved in the creation of JSON-LD. It is quite an entertaining yet informative article.

Bidding you goodbye. Meet you next time.
Till then be curious and keep learning!

by gutsytechster at March 10, 2019 07:57 PM

Shiva Saxena (shiva)

What is a makefile?

Hello everyone! Ever wanted to write a shell script to automate a task in your project? For example, after cloning the project, do task X and then manipulate file Y, etc. For this, I used to write shell script files, so that after cloning the project a user could run those scripts and get the work done. But rather than a shell script, it is a good idea to add a makefile. Wondering why? Keep reading.

What are makefiles and why are they used?

For me, these files are a kind of shortcut for writing multiple shell scripts in one file, separated by labels, and then accessing each script using its label name.

The best example is installing software.
You clone it,
then execute configure,
which generates a makefile,
and then you run make to execute the makefile for the complete installation process.

But not all projects need to use configure to generate a makefile; I mean, what if your project has nothing to do with installation at all? In this case, to automate some shell tasks, you may create a static makefile.

Complete explanation of Makefiles is out of the scope of this post. For in-depth reading, please refer: https://www.gnu.org/software/make/manual/make.html

Demo of a makefile

I'll take a quick example to illustrate how it works. Let's say we have a project that a user can clone, after which the following tasks need to be done.

  1. Print a message that the makefile is running.
  2. Create a new directory, say dump
  3. Create a file in it, say /dump/trash
  4. Add some text to it, say “Going to trash”

I repeat, these tasks could be done using a shell script file like to-do.sh or anything, but it's better to use a makefile for such post-download tasks.

Let's do it!

1. Make a test directory

$ mkdir demo
$ cd demo

2. Create a makefile

$ touch makefile

3. Add some content (remember that the command lines under a target must start with a tab character)

create:
	$(info Makefile is running!)
	mkdir dump
	touch dump/trash
	echo "Going to trash" >> dump/trash

4. Execute with make

$ make
Makefile is running!
mkdir dump
touch dump/trash
echo "Going to trash" >> dump/trash

make executes commands written in makefile. That’s it. Simple? So let’s create some variations:

5. Add more labels

Add one more label, say drop, such that the contents of the makefile become as follows:

drop:
	$(info Deleting directory dump/)
	rm -rf dump/

create:
	$(info Makefile is running!)
	mkdir dump
	touch dump/trash
	echo "Going to trash" >> dump/trash

6. Use the makefile with versatility

Now, users may enter different commands:

$ make drop
Deleting directory dump/
rm -rf dump/

$ make create
Makefile is running!
mkdir dump
touch dump/trash
echo "Going to trash" >> dump/trash

$ make
Deleting directory dump/
rm -rf dump/

NOTE: Being at the top, drop acts as the default target, which will be called if the make command is entered without any specific label.

I am keeping this post short and will keep updating it as I learn more. Readers may explore more of makefiles as per their interest 🙂
Here is the link: https://www.gnu.org/software/make/manual/make.html

Conclusion

Makefiles are fantastic! I will be using them more for post-download tasks rather than using shell script files.

Hope you like makefiles!

Thanks for reading! 🙂

See you in the next post!

by Shiva Saxena at March 10, 2019 12:02 PM

March 09, 2019

Prashant Sharma (gutsytechster)

Semantic Web and Linked Data

Hey Wassup!
I came across something amazing known as the Semantic Web, which is associated with another awesome concept, Linked Data. We'll try to understand both of these one by one and feel their awesomeness.

Semantic Web

What's all the hype about the Semantic Web? We'll know in a few minutes. The term 'Semantic Web' was coined by Sir Tim Berners-Lee, best known as the inventor of the World Wide Web. He has described the Semantic Web as a component of "Web 3.0".

Let’s see what Wikipedia says about it:

The term was coined by Tim Berners-Lee for a web of data (or data web) that can be processed by machines — that is, one in which much of the meaning is machine-readable.

Now to understand this part: in today's web, most of the data is available in the form of HTML documents. These HTML documents are linked with each other using hyperlinks. When we read a document containing a link, we can tell whether the link should be dereferenced or not; we can tell the relation of the link to the given document. But machines or computer software agents can't. Machines can also read these documents, but other than typically seeking keywords in a page, machines have difficulty extracting any information from them. Hence, we needed a way for a machine to process the data available on the web semantically, so that it can understand the meaning behind the information and work in cooperation with people.

The Semantic Web approaches this idea by publishing these documents in formats specifically designed for data, such as the Resource Description Framework (RDF) and the Web Ontology Language (OWL). RDF describes a statement as a triple. A triple consists of a subject, a predicate and an object. Take the sentence "Mary is the parent of Frank". Here Mary can be seen as the subject, Frank as the object, and the relation between the two, i.e. Parent, as the predicate. It can also be represented in a structure called a data graph.

Fig 1. Data Graph

Here we have linked two pieces of information through a relation, and that's how the Semantic Web relates two pieces of data, telling how they are related. But there is still a problem. Can you point it out?
When we are talking about Mary and Frank, we know which Mary or Frank we are talking about, because during a conversation an environment is built and we take it as a reference throughout. But computers don't know what the reference is. Hence we need to specify exactly which Mary or which Frank it is. And we do it using a Uniform Resource Identifier (URI). It uniquely identifies whatever there is on the web. Therefore, a computer identifies each subject or predicate using its URI. Internally, it can be viewed as:

Fig 2. Data Graph in terms of URI

Here every relation is defined and specific. This linking of data through URIs to define semantic meaning is what we call Linked Data.
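As a small illustration, the Mary/Frank triple could be built in code like this; it is only a sketch, and the rdflib package and the example.org URIs are my own assumptions, not something from this post.

# A small sketch: building the "Mary is the parent of Frank" triple with rdflib.
# The URIs below are made-up examples, not real vocabulary terms.
from rdflib import Graph, URIRef

g = Graph()
g.add((
    URIRef("http://example.org/people#Mary"),           # subject
    URIRef("http://example.org/relations#isParentOf"),  # predicate
    URIRef("http://example.org/people#Frank"),          # object
))
print(g.serialize(format="turtle"))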

Linked Data

Different pieces of information across the web can be linked to each other by providing a semantic meaning. A data graph may link with another data graph from anywhere on the web, and this forms the foundation of the Semantic Web. This linking of data is referred to as Linked Data.

Fig 3. Graph comprising two data graphs

When working with Linked Data, we come across two possible questions:-

  • What’s the best way to represent the Linked Data?
  • How to link these data together?

We know the answer to the 2nd question, right? Yeah, using relations and URIs. As for the 1st question, there can be multiple answers, or rather I should say there is no single best way; it's all about the use case. There are many formats like HTML, JSON, XML, CSV, RDFa etc. One of these formats is known as JSON-LD. It stands for JavaScript Object Notation for Linked Data. Since JSON is the most often used data format across the web, we needed something that could be used just like JSON but also support Linked Data. Here comes JSON-LD. Though the usage of JSON-LD is a talk for another time.

To summarize, we can say that

Semantic Web is the “new generation” of hyperlinking (Web 3.0, hypermedia) that contain semantic referencing. Linked Data is the data itself that is described by semantic linking. RDF is the “logical” framework for describing the data (metadata). JSON-LD is one of the possible format on which we can define Linked Data.
By Lorenzo

Big companies like Google and Facebook are already making use of Linked Data. For e.g. Google uses Knowledge Graphs and Facebook uses the Open Graph Protocol through something popularly known as OG tags.

Further Reading

  1. http://www.linkeddatatools.com/semantic-web-basics
  2. https://www.quora.com/What-is-the-Semantic-Web
  3. A short introduction to the semantic web
  4. What is Linked Data?

Well then, it’s time to say goodbye. Meet you next time.

Till then be curious and keep learning!

by gutsytechster at March 09, 2019 08:19 PM

March 08, 2019

Manank Patni

Migrating Existing Data From Sqlite to Other Databases

When we begin our learning journey with Django we use the default database, i.e. SQLite. It is quite enough for development and learning purposes. But as we make progress with our project and/or want to switch to higher-end databases like MySQL, PostgreSQL etc., we will have to transfer our existing data to the new database. First, dump the existing data into a JSON file:

python manage.py dumpdata -o data.json --format json

Change the settings.py file and connect to the new database.
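For reference, a PostgreSQL configuration could look like the sketch below; the database name, user and password are placeholders of mine, not values from the post.

# settings.py (hedged sketch, values are placeholders)
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'mydatabase',
        'USER': 'mydatabaseuser',
        'PASSWORD': 'mypassword',
        'HOST': 'localhost',
        'PORT': '5432',
    }
}

After that, run: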

python manage.py migrate

This would create the tables according to the models we have made in the new database.

python manage.py loaddata data.json

If it runs successfully, all the data will be transferred to the new database.

by manankpatni at March 08, 2019 06:20 PM

March 06, 2019

Shiva Saxena (shiva)

How to encrypt USB drives with LUKS

Hello readers! Ever thought about the risk of losing your USB drive containing important data? You surely don't want others to get that data without your permission, right? In this case, encrypting your USB device is a recommended way to add a security layer. Keep reading for a simple tutorial on encrypting USB drives with LUKS.

What is LUKS?

The Linux Unified Key Setup or LUKS is a disk-encryption specification created by Clemens Fruhwirth and originally intended for GNU/Linux. Notice the word specification; instead of trying to implement something of its own, LUKS is a standard way of doing drive encryption across tools and distributions. The reference implementation for LUKS operates on GNU/Linux and is based on an enhanced version of cryptsetup, using dm-crypt as the disk encryption backend.

Starting with the tutorial step by step (I am using Ubuntu 18.04 Bionic Beaver)

1. See available filesystems

 df -hl

2. Connect your USB

3. Find out the new connected device

df -hl  # in my case it was /dev/sdb1

4. Unmount the USB

umount /dev/sdb1

5. Wipe filesystem from the USB

Note: check the drive name/path twice before you press enter for any of the commands below. A mistake might destroy your primary drive, and there is no way to recover the data. So, execute with caution.

sudo wipefs -a /dev/sdb1
/dev/sdb1: 8 bytes were erased at offset 0x00000036 (vfat): 46 41 54 31 36 20 20 20
/dev/sdb1: 1 byte was erased at offset 0x00000000 (vfat): eb
/dev/sdb1: 2 bytes were erased at offset 0x000001fe (vfat): 55 aa

6. Create a LUKS partition

sudo cryptsetup luksFormat /dev/sdb1 

WARNING!
========
This will overwrite data on /dev/sdb1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase: 
Verify passphrase:

7. Open the encrypted drive

sudo cryptsetup luksOpen /dev/sdb1 reddrive
Enter passphrase for /dev/sdb1:
ls -l /dev/mapper/reddrive 
lrwxrwxrwx 1 root root 7 Jul 26 13:32 /dev/mapper/reddrive -> ../dm-0

8. Create a filesystem

I am going with EXT4, you may create any other filesystem as well.

sudo mkfs.ext4 /dev/mapper/reddrive -L reddrive
mke2fs 1.42.13 (17-May-2015)
Creating filesystem with 245500 4k blocks and 61440 inodes
Filesystem UUID: 23358260-1760-4b7b-bed5-a2705045e650
Superblock backups stored on blocks: 
32768, 98304, 163840, 229376

Allocating group tables: done 
Writing inode tables: done 
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

9. Using the encrypted USB

9.1: If you select to mount/unmount your encrypted USB using CLI:

sudo mount /dev/mapper/reddrive /mnt/red
su -c "echo hello > /mnt/red/hello.txt"
  Password:
  ls -l /mnt/red
  total 20
  -rw-rw-r--. 1 root root     6 Jul 17 10:26 hello.txt
  drwx------. 2 root root 16384 Jul 17 10:21 lost+found

sudo umount /mnt/red
sudo cryptsetup luksClose reddrive

9.2: If you just use GUI to use the encrypted USB as I do then a similar dialog will appear:

[screenshot: passphrase dialog for the encrypted drive]

Just give your passphrase, save your data in it and eject safely. As simple as that!

Resources

Conclusion

LUKS is wonderful. I recommend using it not just to keep your sensitive data secure but also in general.

Hope you are going to make use of LUKS and suggest it to your friends as well.

Thanks for reading!
See you in the next post 🙂

 

 

by Shiva Saxena at March 06, 2019 10:13 AM

March 03, 2019

Shiva Saxena (shiva)

Testing with unittest.mock

Hello! Just 10 days back, I tweeted this.

While I always found it difficult, some people say writing mock test is super easy. I think it’s time for me to code more modular.

A week after making the tweet, I set myself to read the official documentation with patience. The more I read, the more I started liking the tool. In the end, I understood that people were right in saying that "mock tests are easy". Below is a quick overview of what I could understand about writing mock test cases.

What is unittest.mock?

In short:

unittest.mock is a library for testing in Python. It allows you to replace parts of your system under test with mock objects and make assertions about how they have been used.

Mock objects here refer to a kind of dummy object whose behavior is under your control.

With example:

Let's say you are making a web app that integrates with another ticket-managing web app; you are using its API in your code for a specific purpose, say buying tickets and getting back the ticket id. So you write code that sends a request to the ticket-managing app and gets the ticket id in response. But here is the twist!

The ticket-managing web app is kinda money-minded and doesn't want to entertain your requests free of cost. So you need to pay a little amount every time you make a request. Okay? Now you have written the code, tested it 2-3 times and paid a small amount to the other app. That doesn't matter much. But if you are a good developer, then you must have written automated test cases to test the behavior of your app. And every time you run the test suite, a couple of requests are made that cost you, again, another small amount.

During vigorous development, you run the test suite countless times, and if it is going to charge you a small amount every time, it is well understood that it's going to take a big toll on your wallet.

Here come mock tests, using which you can kind of deactivate the functions of that web app's API and assume their responses. So you don't need to send a real request to the other application, and this way you save your money 🙂
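As a tiny, self-contained sketch of that idea (the function, URL and field names below are made up by me for illustration, and the requests package is assumed to be installed), the paid call can be mocked so the test never hits the real API:

# A tiny sketch: mocking a paid ticket API call so tests never hit it.
from unittest import mock

import requests

def buy_ticket():
    # In real life this POST would cost money every time it runs.
    response = requests.post('https://tickets.example.com/buy')
    return response.json()['ticket_id']

def test_buy_ticket_returns_id():
    with mock.patch('requests.post') as mock_post:
        # Control what the "API" returns, without any network call.
        mock_post.return_value.json.return_value = {'ticket_id': 42}
        assert buy_ticket() == 42
        mock_post.assert_called_once()

test_buy_ticket_returns_id()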

Use cases

You write mock tests:

  • while using 3rd party APIs – as you want to test YOUR API, not theirs.
  • while your code makes requests on a remote resource using the internet – As you might want to run your test cases even at places without internet.
  • while sending requests to an async tool – like celery beat; suppose the beat is set to 5 minutes, so it will run only every 5 minutes, but it's not a good idea to keep your test suite on hold till the next beat, so you just test that the celery task is called, not the actual running of that task.
  • while you want to set explicitly the return value of a function – As you might want to test your feature for multiple return values of a function.
  • while you want to explicitly raise an exception while a particular function gets called – As you might want to test the working of your code in a situation of getting encountered with an exception.

Example with mock.patch

There are lots of functions available in unittest.mock. For me, patch turned out to be the most useful. That's why I am showing just the patch function in this example, and that too very briefly; readers may explore more as per their interest.

case 1: As a function decorator

File: 1

# file at project/app1/foo_file

def foo(arg1, arg2):
    return arg1 + arg2

File: 2

# file at project/app2/bar_file
from project.app1.foo_file import foo

def bar():
    try:
        return foo(1, 1)
    except NameError:
        return "Error"

File: 3

# file at project/app3/test_file
from unittest import mock

from project.app2.bar_file import bar

@mock.patch('project.app2.bar_file.foo')
def test_bar(mock_foo):
    # Here foo() is now mocked in bar_file, and this mocked function
    # is passed to kwarg: mock_foo for further references.

    bar()
    # Calling the function under test
    
    # testing if mock function was called
    assert mock_foo.assert_called_with(1, 1) is None
    assert mock_foo.assert_called_once_with(1, 1) is None
    
    # manipulating the return value of mock function
    mock_foo.return_value = 5
    assert bar() == 5

    # manipulating the mock function to raise exception where it gets called
    mock_foo.side_effect = NameError('reason')
    assert bar() == "Error"

NOTE: Where to patch?
We need to patch the function where it is getting used, not where it is defined. In the example above, foo is defined in foo_file but used in bar_file, thus we mocked the foo function in bar_file (see the argument passed to @mock.patch()).

case 2: As a context manager

In the example above, we patched the function foo for a complete test function. But if we don’t want that, and instead just want to mock a function for a limited scope inside a test function, here is how to do it.

File: 3    (File: 1 and File: 2 remains same)

# file at project/app3/test_file
from unittest import mock

def test_bar():
  with mock.patch('project.app2.bar_file.foo') as mock_foo:
      # Here foo() is now mocked in bar_file, and this mocked function
      # can now be referenced using mock_foo.
      
      mock_foo.return_value = 5
      assert bar() == 5
      # Inside 'with' scope: mocked behavior present

  assert bar() == 2
  # Outside 'with' scope mocked behavior absent

Explore more unittest.mock

Conclusion

I see unittest.mock as a really useful tool for all the use cases listed above. I hope you don’t find mock testing difficult, but if you do, then I seriously suggest reading the official docs; they are just lovely and show the power of documentation!

Thanks for reading! See you in the next post 🙂

by Shiva Saxena at March 03, 2019 06:40 PM

February 26, 2019

Shiva Saxena (shiva)

Git stash is really cool

Ever messed up with git repositories (who hasn’t)? Git stash may turn out to be a lifesaver. It has some really cool options. Let’s check them out!

Get acquainted with stash methodology

Stash simply does a clearing job. Say you want to pull changes into your local repo and your current changes are blocking the pull. You may use git stash to push them into the background, so they can be popped out later on. As simple as that.

Stash options:

  • push
  • list
  • show
  • pop
  • apply
  • branch
  • clear
  • drop

Their brief definitions are available in git-stash manual. Feel free to give a quick look at man git-stash. Following are some example based usage in brief.

git stash push

Or just git stash, if you don’t want to use any other option with it. It simply pushes your current changes (both staged and unstaged) to the stash area.

mkdir test_stash
cd test_stash
git init
touch cool_stash
git add cool_stash
git commit -m "Add cool_stash"
echo "Hello stash" >> cool_stash

Now, these changes are not staged. Try:

git status
  modified:    cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash

It’s like you made some changes and, before committing, you want to try a different approach. So you just stash your current changes and try out the different approach; if you like it, keep it. If you don’t like it, then git reset --hard and bring back your old changes with the pop option shown below.

git stash list

Let’s do one more stash entry first.

  echo "hello again" >> cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash

Now if you have done a couple of stash entries, then you may have a look on the stash list with:

git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash

Here are our 2 stash entries.

stash@{0} is new and thus on the top of the stack.
stash@{1} is the old one.

But these are not specific. I want to know what changes are stored in each stash entry. Let’s go ahead 🙂

git stash show

To see changes (diff) stored in any stash entry use

git stash show -p stash@{0}
  diff --git a/cool_stash b/cool_stash
  index e69de29..13ab7f7 100644
  --- a/cool_stash
  +++ b/cool_stash
  @@ -0,0 +1 @@
  +hello again

For the recent stash, you may omit stash@{0}.

git stash pop

Okay, now I want to take out the stashed changes in stash@{1}. It’s simple.

git stash pop stash@{1}

Note that

  • popped changes are unstaged
  • popped stash is no longer present in stash area. Verify it with git stash list.

What if we want to pop out a stash entry without removing it from the stash area? Here comes the next option.

git stash apply

First, send the current changes back to stash and then try apply.

git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash
git stash apply
  modified:    cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash

Note that

  • applied changes are unstaged
  • applied stash is still present in the stash area. Verify it with git stash list.

git stash branch

In case you want to pop out a stash entry, but on a new branch, then you can make use of this option. Example:

git status
  modified:    cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash
  stash@{2}: WIP on master: 2855c2a Add cool_stash
git stash branch side_branch stash@{0}
  Switched to a new branch 'side_branch'
  On branch side_branch
  Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git checkout -- <file>..." to discard changes in working directory)
  
  modified: cool_stash

  no changes added to commit (use "git add" and/or "git commit -a")
  Dropped stash@{0} (919c486edb34e276383eb1682db0c29ac7eb9623)

Note:

  • After branching successfully, the applied stash entry is dropped.
  • Branching will still succeed, but if the applied changes create conflicts, then the applied stash entry is not dropped.

Why it is useful, as per the manual page:

This is useful if the branch on which you ran git stash push has changed enough that git stash apply may fail due to conflicts. Since the stash entry is applied on top of the commit that was HEAD at the time git stash was run, it restores the originally stashed state with no conflicts.

My understanding: if you are afraid of applying stashed changes to the current HEAD because you believe it may create conflicts, then you can use this option to apply the stash on a separate branch instead.

git stash clear

How many stash entries do you have now on your master? It may be any number; to remove all of them in one shot, clear the stash area with this option.

git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash
  stash@{2}: WIP on master: 2855c2a Add cool_stash
git stash clear
git stash list

Moving on!

git stash drop

It simply drops (deletes) a stash entry. Let’s say:

echo "stash stash stash" >> cool_stash
git status
  modified: cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
echo "git git git" >> cool_stash
git stash
  Saved working directory and index state WIP on master: 2855c2a Add cool_stash
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash
  stash@{1}: WIP on master: 2855c2a Add cool_stash
git stash drop stash@{1}
  Dropped stash@{1} (bd2b6c6d98742ca504677cf36ddb6bc93d535654)
git stash list
  stash@{0}: WIP on master: 2855c2a Add cool_stash

Some less commonly used options for git stash are also available:

  • git stash save
  • git stash create
  • git stash store

Conclusion

It is fun to use git stash at times. I remember using it to bring a deleted file back. I was working on a git repo and had some current changes, some staged and some unstaged, and I accidentally deleted a useful file. I wanted to get that file back, and I knew I could do it with git reset --hard to revert all local changes. But at the same time, I didn’t want to lose my current changes. So what I did was stash my current changes (except the change deleting the file), run git reset --hard to get my file back, and then pop out the stashed changes. Simple? Yeah, when you know it.

I used interactive git stash in that case with

git stash push --patch

where I could choose exactly which hunks of changes to stash (as I didn’t want to stash the change that deleted the file).

Hope you liked it. See you in the next post! o/

Thanks for reading! 🙂

 

 

by Shiva Saxena at February 26, 2019 04:46 PM

February 25, 2019

Shiva Saxena (shiva)

My experience in HackVSIT-2k19

Hello everyone! Recently, I went to a Hackathon (HackVSIT) held at Vivekanand Institute of Professional Studies. Following are some memorable glimpses of the event.

My friend InquiridorTechie has already written a quick post describing the event nicely as a timeline. So I decided to write just about my top 5 favourite moments 🙂 Let’s hit the countdown.

#5 – Food and snacks

In short, they were really delicious! We had lunch, evening snacks, dinner, midnight snacks and breakfast, and we enjoyed each one of them. Believe me, that samosa sauce was exceptional!

I don’t know how to explain food times more than that. Moving ahead.

#4 – Ideation and naming convention

So, we were at the hackathon opening ceremony and the hack was about to begin, but surprisingly we had nothing in mind to work on. It is not that we couldn’t come up with a new idea; it was more that we wanted to build software solving a real problem rather than assuming a hypothetical problem and solving it abstractly.

We were all scratching our heads to come up with a useful idea. And after around an hour of hard work, we finally came up with a new yet interesting problem-solving idea (at least, we think it is nice).

The idea: “reduce the hard work developers put into making their custom dotfile setup, by providing a command line application that can do the work for them.”

At first thought it appears simply useful. You just need to run our CLI app and your dotfile setup is ready to upload anywhere. Hurray! Isn’t that great?

Now, the second thing was to give it a “name”. I always like this part. Soon we started to come up with new names and kept rejecting each one of them. Later, keeping in mind some tools named after edibles like ‘kiwi’, ‘celery’, ‘redis’, etc., we figured it was a good idea to take the name of something eatable. First we considered donut, but it was already taken. After going through a couple of dishes we came up with Oliv (olive with the ‘e’ removed). We all liked it and went ahead with it.

#3 – Gotchas with git and pip

The more you work with git, the more tricks and rules you learn. We were working on our idea and using git/GitHub to organize things. Since we were committing and pushing to the same branch, we messed things up a couple of times.

I remember InquiridorTechie once committed to the local repo without pulling first. That reminded us of the trick to undo a local commit, that is:

$ git reset --soft HEAD~1

But the real gotcha is this link: https://stackoverflow.com/questions/24568936/what-is-difference-between-git-reset-hard-head1-and-git-reset-soft-head. I didn’t know about the mixed and keep versions of git reset.

Do you know the full form of pip? I didn’t, but Wikipedia says:

pip is a recursive acronym that can stand for either “Pip Installs Packages” or “Pip Installs Python”.
Alternatively, pip stands for “preferred installer program”.

Also, any time I needed to know the version of a package installed via pip, what I used to do is:

  • Run python CLI
  • help(‘package’)

But GutsyTechster informed me about the show option of pip; now I do: pip show package 🙂

#2 – Final and only evaluation

It was awesome! I had never really pitched an idea and code like this to evaluators before. We explained our idea, the problem statement, how it solves the problem and what tech stack we used.

We showed them the prototype we built during the 24-hour hackathon in action. And we were happy to hear that the evaluators liked our idea, as they said so. Moving ahead to the final and the best part.

#1 – Evaluators are coming!

OMG! What a moment it was! InquiridorTechie and I had no experience of facing an evaluator who analyzes your code/idea/implementation, and who knows what questions they may come up with.

It was around 7pm when teams started waiting for evening snacks, and it was around 8:30pm that we finally got them 😛 That didn’t matter to me, I wasn’t hungry. What mattered was the announcement just after the evening snacks that “Evaluators are coming within 10 minutes”. Oh really? The usual butterflies in my stomach, haha! How are they going to evaluate? My mind was racing!

Soon, we each took on some task to get our idea ready at implementation level 1, so that we could at least show something to the evaluators. Though the evaluators didn’t turn up all night, that 1-hour rush was simply amazing. I think we did as much work in that 1 hour as we had been doing the whole day.

Conclusion

Overall, it was a great experience! I would love to join HackVSIT next year. I would like to thank each one of them for organizing this great event.

Thanks for reading!

by Shiva Saxena at February 25, 2019 04:41 PM

February 23, 2019

Jagannathan Tiruvallur Eachambadi

February 22, 2019

Kuntal Majumder (hellozee)

Windows : A true nightmare

I never expected such a dreadful day would come such that I have to install Windows, cause I didn’t have enough money. And yes, you read it right, nothing is wrong in the previous statement.

by hellozee at disroot.org (hellozee) at February 22, 2019 10:01 AM


February 20, 2019

Neeraj Kumar Arya (InquiridorTechie)

HackVSIT 2k19

vips

Hello, Friends

It has been a long time since I posted any blog. I was learning new stuff and busy with my college curriculum. Finally, I got time to write something new. So here I will share my first experience at a hackathon.

I am sure most of you know what a hackathon is, but let me briefly explain it anyway.

The word “hackathon” is a portmanteau of the words “hack” and “marathon”, where “hack” is used in the sense of exploratory programming, not its alternate meaning as a reference to computer security.

A hackathon (also known as a hack day, hackfest or codefest) is a design sprint-like event in which computer programmers and others involved in software development, including graphic designers, interface designers, project managers, and others, often including subject-matter-experts, collaborate intensively on software projects. The goal of a hackathon is to create usable software or hardware with the goal of creating a functioning product by the end of the event.

I was very keen to participate in a hackathon but, due to lack of knowledge and confidence, I always stepped back. I had heard from others that in a hackathon you have to develop software or something to solve a particular problem, and I didn’t have any knowledge of development. This year I decided to participate in a hackathon and explore my skills, and HackVSIT gave me this opportunity. We four friends decided to register ourselves for the event, and luckily we got a confirmation email 2 days before the hackathon.

The day before the hackathon.

We were all excited to participate. I left my relative’s wedding for this hackathon. But unfortunately one of our friends had to leave the city for some reason. He talked with the hackathon organizers and they allowed us to participate with 3 members. The rest of us were thinking about which problem we should work on. We had 8 tracks for this hackathon:

  1. Human Resources
  2. Blockchain
  3. Mental Health
  4. Fintech
  5. Tools for Developers and Designers
  6. Smart city
  7. IOT,
  8. Computer Vision.

We discussed and decided that Fintech, Tools for Developers and Human Resources were good tracks to work on. The rest of the tracks were also good, but we were not good at them. As I told you, this was my first hackathon, so Prashant and I talked for about an hour that night. He told me what things I should bring with me apart from those mentioned in the FAQs. We discussed real-life problems and planned the next day’s work. And then we went to take a long nap, because the next day we had to stay awake for 24 hours.

Hack Day

I woke up at 6:30 in the morning, packed my things and left home at 7:40. I reached Haiderpur metro station at 9:00 am, waited for my two friends, and then all 3 of us rushed to the college (the venue). After registration, we got the room number where we would hack the whole night. But first, we enjoyed the opening ceremony of the hackathon in the auditorium. VIPS introduced their chief guest, mentors, evaluators, sponsors etc. Meanwhile, Shiva (one of our team members) got a cracking idea to work on and we all agreed to it. The idea was simple but unique. As our project is open source, you can check our idea here on GitHub.

The organizers at VIPS were very helpful and calm. The arrangements made by them were nice, and they provided us a delicious lunch. I was enjoying every moment and sharing our pics on Twitter. After lunch, we began to work with complete devotion. By dinner, we had completed 40% of the work. Then dinner, then work; this is what a hackathon is all about, where you get your hands dirty with code and don’t care about anything else. The same happened with me: I forgot to sleep and eat. We hacked the whole night, wrote shell scripting in Python, and shared our midnight snacks pic on Twitter.

hack

End of the Hackathon

The evaluators came in the morning; they liked our project and congratulated us. However, we were not selected in the top 12 teams, but the experience was fantastic. In the end, we got a participation certificate and stickers. One more thing I experienced: after this hackathon I got a sweet sleep.

I would like to thank the VIPS team for organizing such a great event. I feel honored to have been selected for this hackathon. In this hackathon, I learned that we should push our limits and think out of the box; only then can we achieve something. I will keep taking part in the upcoming hackathons this year and keep exploring my skills and knowledge.

References

1. https://en.wikipedia.org/wiki/Hackathon

I hope you liked this blog!

See you soon!

Have a good day!

InquiridorTechie.

by inquiridortechie at February 20, 2019 05:28 PM

February 18, 2019

Bhavin Gandhi

Entering The Church of Emacs

From the title you may think this is another post or debate on <your favorite editor here> vs GNU Emacs. Instead, I will be talking about how I started using Emacs and why it is the best tool I have ever used. How it started: during dgplug summer-training 2018, we had a series of sessions on GNU Emacs. Thanks to mbuf, the whole series helped me a lot to get started with Emacs, and now I use it for all my editing.

by @_bhavin192 (Bhavin Gandhi) at February 18, 2019 04:31 PM

February 17, 2019

Priyanka Sharma

Journey with nature: The Guava Leaf

g.jpg

Seeing and observing are two different things. We are all familiar with guavas and we must have eaten them too. But have you ever tried to observe what benefits they may give you? I have had two guava trees in my garden for the past 10 years; I have seen them grow from saplings into trees but had never really observed them. Recently I came across many great benefits that guava leaves can provide.

Guava is well known throughout the world as a tropical fruit rich in nutrients. People love to eat it as it has a sweet and juicy flavor. Guava is not only consumed as food but is also used for medicinal purposes. The fruit, leaf and other parts of guava have been reported to benefit human health. Scientific studies have documented the healthful qualities of the superfruit’s leaves, and you can see what they’ve found for a variety of conditions below:

1. Diarrhea

  • Medicinally, guava leaf is mostly used to treat diarrhea. Diarrhea is a condition where the colon can’t absorb water, often due to bacterial infection such as Staphylococcus aureus. Studies report that guava leaf has strong anti-bacterial compounds such as tannins and essential oils, which are effective in fighting S. aureus infection and inhibiting bacterial growth.
  • One way to use guava leaves against diarrhea is to take 6 guava leaves and wash them. Then boil them thoroughly and squeeze the leaves to get the leaf extract. Then just drink it straight, once every two days, until you feel much better.
  • People suffering from diarrhea who drink guava leaf tea may experience less abdominal pain, fewer and less watery stools, and a quicker recovery, according to Drugs.com. Add the leaves and root of guava to a cup of boiling water, strain the water and consume it on an empty stomach for quick relief.

da.jpg

2. Lowers Cholesterol

  • It is surprising that guava leaf can reduce the level of cholesterol in blood, which otherwise can cause many health problems. Studies report that guava leaf contains active phytochemical compounds such as gallic acid, catechin and epicatechin, which can inhibit pancreatic cholesterol esterase and slightly reduce cholesterol levels.
  • LDL, or low-density lipoprotein, is one of the five major groups of lipoproteins which transport fat molecules throughout your body. It is the excess of this class of cholesterol that may cause a host of health disorders, particularly of the heart. According to an article published in Nutrition and Metabolism, study participants who drank guava leaf tea had lower cholesterol levels after eight weeks.

ch.jpg

3. Manages Diabetes

  • Japan has approved guava leaf tea as one of the foods for specified health uses to help with the prevention and treatment of diabetes. The compounds in the tea help regulate blood sugar levels after meals, by inhibiting the absorption of two types of sugars – sucrose and maltose. According to an article published in Nutrition and Metabolism, guava leaf tea inhibits several different enzymes that convert carbohydrate in the digestive tract into glucose, potentially slowing its uptake into your blood.
  • Catechin in guava leaf can not only burn fat but also control blood glucose levels; in other words, it has a hypoglycemic effect on the body. This may help prevent the development of diabetes, especially type 2, which often develops alongside obesity.

diab.jpg

4. Promotes Weight Loss

  • Looking to shed the extra inches around your belly? Sip into guava leaf tea. Guava leaves help prevent complex carbs from turning into sugars, promoting rapid weight loss. Drink guava leaves tea or juice regularly to reap the benefits.

wl.jpg

5. Fights Cancer

  • Guava leaves contain high quantities of the antioxidant lycopene, and various studies have revealed that lycopene plays a significant role in lowering the risk of cancer.
  • Many studies have been conducted to find the components and benefits of guava leaf. One of the best benefits you may find in guava leaf is anti-cancer activity. It has been reported that guava leaf can reduce the risk of several types of cancer, such as gastric, breast, oral and prostate cancer. This benefit comes from the antioxidants contained in guava leaf, such as quercetin, lycopene and vitamin C. Those components can induce apoptosis, or the self-killing activity of cancer cells, according to a study published in 2011.

ca.jpg

How to Make Guava Leaves Tea

To get all those benefits, you can start consuming guava leaves as tea. Below are the steps to make guava leaf tea:

  1. Dry some young guava leaves
  2. After they are dry, crush them into powder
  3. Use one tablespoon of guava leaves and add it to one cup of hot water
  4. Let it brew for 5 minutes then you can strain it
  5. Drink guava leaves tea regularly, once a day

Those are the benefits you may get from guava leaves. You can consider them a natural remedy with many good effects on your body, and of course a low-cost one which you can get almost anywhere.

tea.jpg

by priyanka8121 at February 17, 2019 04:48 PM

Jagannathan Tiruvallur Eachambadi

(Neo)vim Macro to Create Numbered Lists

I usually encounter this when saving notes about a list of items that are not numbered but are generally better off being numbered. Since this is such a common scenario, I did find a couple of posts1 2 that explained the method, but they had edge cases which were not handled properly.

Say you want to note down a shopping list and then decide to number it later,

Soy milk
Carrots
Tomatoes
Pasta

Start off by numbering the first line and then move the cursor to the second line. Then, the steps are

  1. Start recording the macro into a register, say a, by using qa.
  2. Press k to go one line up.
  3. yW to copy one big word, in this case “1. ”.
  4. Then j to come one line down and | to go to the start of the line.
  5. Use [p to paste before and | to go the beginning.
  6. To increment, Ctrl+A and then j and | to set it up for subsequent runs.

To run the macro, go to the next line and execute @a. For repeating it 3 times, you can use 3@a.

by Jagannathan Tiruvallur Eachambadi (jagannathante@gmail.com) at February 17, 2019 12:05 PM

February 16, 2019

Priyanka Sharma

Networking: Heart of World !

Ask ten different people what networking is and you may get as many as ten different answers. A person’s definition of networking probably depends upon their use of this important personal and professional activity. Whether you network to make new friends, find a new job, develop your current career, explore new career options, obtain referrals or sales leads, or simply to broaden your professional horizons, it is important to focus on networking as an exchange of information, contacts or experience.

Networking is one of the most fascinating things ever. Here I am, writing from my place, and you are reading from your place. This is cool. But have you ever wondered what the world would look like without this heart? It is beyond my imagination. Being a Computer Science student, I spend most of my time sitting in front of the laptop doing coding stuff, programming, web development and much more. So, slowly, I have developed an interest in computer networking.

How does a computer network really work? How has the network developed to this vast extent? What would happen to the world without networking? These are some of the questions that fascinate me more and more, so much that I couldn’t stop myself from writing this.

Computer networking is the practice of interfacing two or more computing devices with each other for the purpose of sharing data. Computer networks are built with a combination of hardware and software.

net

Clients and Servers

An important relationship on networks is that of the server and the client. A server is a computer that holds content and services such as a website, a media file, or a chat application. A good example of a server is the computer that holds the website for Google’s search page: http://www.google.com. The server holds that page, and sends it out when requested.

A client is a different computer, such as your laptop or cell phone, that requests to view, download, or use the content. The client can connect over a network to exchange information. For instance, when you request Google’s search page with your web browser, your computer is the client.

MAC address

Think of MAC addresses like people’s addresses or phone numbers: no two devices can have the same MAC address. The thing about a MAC address is that it’s only used in LANs; it’s an address that is only usable inside a local network. You can’t send data to a device in a different network using its MAC as the destination, but you can send data to devices in your local network using the MAC address as an identifier.

When a device is manufactured, its network chip is given an address called the MAC address. A media access control address of a device is a unique identifier assigned to a network interface controller for communications at the data link layer of a network segment.

mac

Traditional MAC addresses are 12-digit (6 bytes or 48 bits) hexadecimal numbers. By convention, they are usually written in one of the following three formats (a small formatting sketch follows the list):

  • MM:MM:MM:SS:SS:SS
  • MM-MM-MM-SS-SS-SS
  • MMM.MMM.SSS.SSS
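As a tiny illustration (my own sketch, not a standard library function), the same 12 hex digits can be printed in all three conventions listed above:

def format_mac(hex_digits):
    """Render a 12-hex-digit MAC address in the three notations above."""
    h = "".join(c for c in hex_digits if c.isalnum()).upper()
    assert len(h) == 12, "a MAC address has exactly 12 hex digits"
    pairs = [h[i:i + 2] for i in range(0, 12, 2)]      # 6 groups of 2 digits
    triplets = [h[i:i + 3] for i in range(0, 12, 3)]   # 4 groups of 3 digits
    return ":".join(pairs), "-".join(pairs), ".".join(triplets)

print(format_mac("00A0C914C829"))
# ('00:A0:C9:14:C8:29', '00-A0-C9-14-C8-29', '00A.0C9.14C.829')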

IP Address

For a computer to communicate with another computer it needs an IP address, and it must be unique. If there is another computer on the same network with the same IP there will be an IP address conflict and both computers will lose network capability until this is resolved.

The IP address consists of 4 numbers separated by decimals. The IP address itself is separated into a network address and a host address. This means that one part of the IP address identifies the computer network ID and the other part identifies the host ID.
As an example, an IP address of 192.168.0.45 is known as a class C address (more on classes later). A class C network uses the first 3 numbers to identify the network and the last number to identify the host. So, the network id would be 192.168.0 and the host id would be 45. Computers can only communicate with other computers on the same network id. In other words, networking will work between 2 computers with IPs 192.168.0.231 and 192.168.0.45 respectively, but neither can communicate with 192.168.1.231 because it is part of the 192.168.1 network.

ip

                              IP address = Network ID part + Host ID part

An IP address has two components, the network address and the host address. A subnet mask separates the IP address into the network and host addresses (<network><host>). Subnetting further divides the host part of an IP address into a subnet and host address (<network><subnet><host>) if additional subnetwork is needed.

Sub-Classes of IP addressing:

bi.jpg

The 32 bit IP address is divided into five sub-classes. These are:

  • Class A
  • Class B
  • Class C
  • Class D
  • Class E

Each of these classes has a valid range of IP addresses. Classes D and E are reserved for multicast and experimental purposes respectively. The order of the bits in the first octet determines the class of an IP address. The class of an IP address determines the bits used for the network ID and host ID, and the total number of networks and hosts possible in that particular class. Each ISP or network administrator assigns an IP address to each device that is connected to its network. An IPv4 address is divided into two parts:

  • Network ID
  • Host ID

Class A:

IP addresses belonging to class A are assigned to networks that contain a large number of hosts.

  • The network ID is 8 bits long.
  • The host ID is 24 bits long.

The higher order bit of the first octet in class A is always set to 0. The remaining 7 bits in first octet are used to determine network ID. The 24 bits of host ID are used to determine the host in any network. The default sub-net mask for class A is 255.x.x.x. Therefore, class A has a total of:

  • 2^7= 128 network ID
  • 2^24 – 2 = 16,777,214 host ID

ca.jpg

Class B:

IP addresses belonging to class B are assigned to networks that range from medium-sized to large-sized.

  • The network ID is 16 bits long.
  • The host ID is 16 bits long.

The higher order bits of the first octet of IP addresses of class B are always set to 10. The remaining 14 bits are used to determine network ID. The 16 bits of host ID is used to determine the host in any network. The default sub-net mask for class B is 255.255.x.x. Class B has a total of:

  • 2^14 = 16384 network address
  • 2^16 – 2 = 65534 host address

cb.jpg

Class C:

IP addresses belonging to class C are assigned to small-sized networks.

  • The network ID is 24 bits long.
  • The host ID is 8 bits long.

The higher order bits of the first octet of IP addresses of class C are always set to 110. The remaining 21 bits are used to determine network ID. The 8 bits of host ID is used to determine the host in any network. The default sub-net mask for class C is 255.255.255.x. Class C has a total of:

  • 2^21 = 2097152 network address
  • 2^8 – 2 = 254 host address

cc.jpg

Class D:

IP addresses belonging to class D are reserved for multicasting. The higher order bits of the first octet of IP addresses belonging to class D are always set to 1110. The remaining bits are for the addresses that interested hosts recognize.

Class D does not possess any subnet mask. IP addresses belonging to class D range from 224.0.0.0 – 239.255.255.255.

cd.jpg

Class E:

IP addresses belonging to class E are reserved for experimental and research purposes. IP addresses of class E range from 240.0.0.0 – 255.255.255.254. This class doesn’t have any subnet mask. The higher order bits of the first octet of class E are always set to 1111.

ce.jpg
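To tie the five classes together, here is a small sketch (my own illustration, not from any library) that reads the leading bits of the first octet, exactly as described above, and reports the class:

def ip_class(address):
    """Classify an IPv4 address into class A-E from its first octet's leading bits."""
    first_octet = int(address.split(".")[0])
    bits = format(first_octet, "08b")   # first octet as 8 bits
    if bits.startswith("0"):
        return "A"
    if bits.startswith("10"):
        return "B"
    if bits.startswith("110"):
        return "C"
    if bits.startswith("1110"):
        return "D"
    return "E"                          # leading bits 1111

for ip in ("10.0.0.1", "172.16.5.4", "192.168.0.45", "224.0.0.5", "250.1.2.3"):
    print(ip, "-> Class", ip_class(ip))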

Subnet:

Maintaining a smaller network is easier, and we can protect one particular network from another by dividing a network into many smaller networks. Each such smaller network is called a subnet.

sub

Subnet Mask:

A subnet mask is a mask used to determine what subnet an IP address belongs to. An IP address has two components, the network address and the host address.

It is called a subnet mask because it is used to identify the network address of an IP address by performing a bitwise AND operation with the netmask. A subnet mask is a 32-bit number that masks an IP address, and divides the IP address into a network address and a host address.

A subnet mask is made by setting the network bits to all “1”s and the host bits to all “0”s. Within a given network, two host addresses are reserved for special purposes and cannot be assigned to hosts: the all-“0”s host address is the network address, and the all-“1”s host address (e.g. 255) is the broadcast address.

Advantage of Subnet Mask:

Given an IP address, if we bitwise AND it with the subnet mask, we get the network ID of the network to which this particular IP address belongs.

IP address: 200.1.2.130

This means that a packet is to be sent to host 200.1.2.130 and we have to find out what is the network in which this particular host belongs to.

  • Convert the IP address to 0’s and 1’s bits: 200.1.2.130 converted to:

11001000.00000001.00000010.10000010

  • Let, Subnet Mask is: 255.255.255.192

11111111.11111111.11111111.11000000

  • Performing bitwise AND-

11001000.00000001.00000010.10000010

11111111.11111111.11111111.11000000  

We will get: 11001000.00000001.00000010.10000000

i.e. 200.1.2.128

Hence, 200.1.2.130 belongs to the network 200.1.2.128
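If you want to verify this without doing the binary arithmetic by hand, Python's standard ipaddress module gives the same result (just a quick check of the worked example above):

import ipaddress

# Host 200.1.2.130 with subnet mask 255.255.255.192
interface = ipaddress.ip_interface("200.1.2.130/255.255.255.192")
print(interface.network)                   # 200.1.2.128/26

# The same bitwise AND, done explicitly
ip = int(ipaddress.ip_address("200.1.2.130"))
mask = int(ipaddress.ip_address("255.255.255.192"))
print(ipaddress.ip_address(ip & mask))     # 200.1.2.128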

 

                                                                                        

 

by priyanka8121 at February 16, 2019 04:24 PM

February 14, 2019

Mohit Bansal (philomath)

Moving!

It's been a long time since I published anything here, but that doesn't mean I stopped writing; I kept writing every day, just didn't publish anything since October 2018. This post is the announcement of moving my blog from here to somewhere else. I know I should have published my writings, but the reason I didn't is that Blogger doesn't support markdown, and for the same reason, I will be

by Abstract Learner (noreply@blogger.com) at February 14, 2019 09:25 AM

February 13, 2019

Prashant Sharma (gutsytechster)

Get start with Django Rest Framework

Hey there everybody!
I was learning the concept of APIs to get started with Django Rest Framework (popularly known as DRF). As soon as I understood the basics, I headed towards DRF. It was really fun learning it, and I bet you will have fun too.

Let’s get started

I am going to jump directly into coding and we will understand things along the way. We will be creating an event reminder app API throughout this tutorial.

So, let’s start with creating a virtual environment. Virtual environments are very helpful when you are working on different projects that need the same dependencies at different versions, and they ease a lot of work for sure. So, we are going to use one. There are a few options to create a virtual environment in Python, though I am going to use pipenv.

If you don’t have it installed then go ahead and install it using pip.

$ pip install pipenv

Now then create a directory anywhere in your system. I’d prefer to be in home.

~$ mkdir RemindEvent && cd RemindEvent

Once we are inside the directory, we create the virtual environment as

~$ pipenv install django djangorestframework

The above command will create a virtual environment along-with installing the python packages `django` and `djangorestframework`. Once it’s done, we can activate our virtual environment as

~$ pipenv shell

Now, you would be seeing the terminal prompt starting with (RemindEvent).

We’ll now start the project using django command as

(RemindEvent)~$ django-admin startproject RemindEvent

Now, once we have created the project, we will create an app using Django. First go into the RemindEvent directory in your main folder and then run

(RemindEvent)~$ python3 manage.py startapp Event

Once you are done with it, your directory structure would look like this:

.
├── Pipfile
├── Pipfile.lock
└── RemindEvent
    ├── Event
    │   ├── admin.py
    │   ├── apps.py
    │   ├── __init__.py
    │   ├── migrations
    │   │   └── __init__.py
    │   ├── models.py
    │   ├── tests.py
    │   └── views.py
    ├── manage.py
    └── RemindEvent
        ├── __init__.py
        ├── settings.py
        ├── urls.py
        └── wsgi.py

If it looks like this, then great work! Now our project is setup perfectly and we are ready to get our hands dirty with code.

Since we have created the app, we need to register it in settings.py. However along with the app we would also need to register rest_framework app.

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'rest_framework',
    'Event',
]
  • Models

We will be creating models to store data in the database, hence we need to define the schema. So, go ahead and write the following in models.py.

from django.db import models


class Event(models.Model):
    """This class represents Event model"""

    name = models.CharField(max_length=255, blank=False)
    creation_date = models.DateTimeField(auto_now_add=True)
    modified_date = models.DateTimeField(auto_now=True)
    alert_date = models.DateTimeField()
    alert_interval = models.DurationField()

    def __str__(self):
        return f"{self.name}"

Once we are done creating models we need to perform migrations

(RemindEvent)~$ python3 manage.py makemigrations
(RemindEvent)~$ python3 manage.py migrate

It will create the corresponding database tables into your django project.

  • Admin

Now that we are done creating the models, we go ahead and register them in admin.py so that they appear in Django’s default admin panel.

from django.contrib import admin
from .models import Event

admin.site.register(Event)
  • Serializer

Now this is something where DRF actually participates. Serializers help to convert complex data like model instances into Python native data types, which can then be rendered into formats like JSON or XML that act as the request-response data format. Just as ModelForm defines a set of rules to directly convert model fields into form fields, rest_framework’s serializers module provides ModelSerializer.
Now that we know what serializers are, let’s create one. Create a new file in your Event directory called serializers.py and write the following into it:

from rest_framework import serializers
from .models import Event


class EventSerializer(serializers.ModelSerializer):
    """This class serializes the Event model instance into formats like JSON"""

    class Meta:
        model = Event
        fields = ('id', 'name', 'creation_date',
                  'modified_date', 'alert_date', 'alert_interval',)
        read_only_fields = ('creation_date', 'modified_date',)

Our EventSerializer class inherits from the ModelSerializer class provided by rest_framework.serializers. The ModelSerializer class itself maps each model field to its corresponding serializer field. We define the creation_date and modified_date fields as read only, i.e. they can’t be edited manually.
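To see what the serializer actually produces, you can try it in the Django shell (python3 manage.py shell); this is just a quick sketch and assumes at least one Event already exists:

from Event.models import Event
from Event.serializers import EventSerializer

event = Event.objects.first()      # assumes an Event was created, e.g. via the admin
serializer = EventSerializer(event)
print(serializer.data)
# A dict-like object with the fields declared in Meta, e.g.
# {'id': 1, 'name': 'Demo event', 'creation_date': '...', 'modified_date': '...',
#  'alert_date': '...', 'alert_interval': '...'}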

  • Views

We define class-based views while creating APIs, though one can use function-based views as well. Class-based views have their own advantages: better code reusability, cleaner and less code, and better coupling. Especially since DRF provides built-in functionality in the form of classes, we can inherit them and override their features as per our requirements. DRF provides generic built-in views to ease our work.

Well, that’s enough talking. Let’s write some code in views.py file.

from rest_framework import generics

from .serializers import EventSerializer
from .models import Event


class CreateView(generics.ListCreateAPIView):
    """This view performs GET and POST http request to our api"""
    queryset = Event.objects.all()
    serializer_class = EventSerializer


class DetailsView(generics.RetrieveUpdateDestroyAPIView):
    """This view performs GET, PUT and DELETE http requests to our api"""
    queryset = Event.objects.all()
    serializer_class = EventSerializer

Now let’s understand what all this does. First we import the generics module from the rest_framework app, which contains the view classes ListCreateAPIView and RetrieveUpdateDestroyAPIView. These view classes provide the functionality for basic CRUD operations, which is exactly what our Event app needs: creating, retrieving, updating or deleting events. The CreateView class handles listing all the available events as well as creating a new event, while the DetailsView class handles retrieving, updating and deleting an event.
In each class, we override the built-in attributes queryset, which is used for returning the objects, and serializer_class, which is used for validating and deserializing input and for serializing output. There are a few other attributes and functions that can be overridden according to the requirements. You can find out about them here.

  • URLs

Now that we are done with creating views, the only thing left is to set up URLs. First create a urls.py file in our Event app and write the following into it:

from django.urls import path
from rest_framework.urlpatterns import format_suffix_patterns
from .views import CreateView, DetailsView

urlpatterns = [
    path('events/', CreateView.as_view(), name='create'),
    path('events/<int:pk>/', DetailsView.as_view(), name='details')
]

urlpatterns = format_suffix_patterns(urlpatterns)

Here we have used the as_view() method with the class-based views so as to return a callable view that takes a request and returns a response; this is because we can’t use class-based views like normal function views. Another thing to mention is that we have used format_suffix_patterns. It allows us to specify the data format when we use the URLs, by appending the format to be used to the URL of every pattern.

Now next thing to do is to link these URLs to the project level urls.py. In RemindEvent/urls.py write

from django.contrib import admin
from django.urls import path, include

urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/v1/', include('Event.urls')),
]

Here we have versioned our api, hence used v1 in urlpattern.

Now is the time to fire up the browser. But before that, let’s create a superuser to log in to the admin panel. Just go to the terminal and, inside RemindEvent, type:

(RemindEvent)~$ python3 manage.py createsuperuser

Once done creating super user, run the server using

(RemindEvent)~$ python manage.py runserver

Now go to the 127.0.0.1:8000/admin and create some events. After it head on to http://127.0.0.1:8000/api/v1/events/

DRFExample

Browsable API offered by DRF

One of the main advantages of DRF is that it provides a browsable interface for testing the API. As you can see, once we go to the endpoint /events/, it lists out all the available events and also provides an interface for creating a new event using a POST HTTP request.

Now go to the http://127.0.0.1:8000/api/v1/events/2/

DRFExample2

At the endpoint /events/2/ we can see all the information regarding the event with id 2. It also provides an interface for updating and deleting the event using PUT and DELETE HTTP requests respectively.

Isn’t it amazing! I feel it’s awesome.
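If you prefer the command line over the browsable interface, here is a quick sketch using the requests package (assuming the dev server is running and requests is installed via pip install requests; the field values below are just examples):

import requests

BASE = "http://127.0.0.1:8000/api/v1/"

# List all events (GET)
print(requests.get(BASE + "events/").json())

# Create a new event (POST); field names match the serializer above
payload = {
    "name": "Demo event",
    "alert_date": "2019-03-01T10:00:00Z",
    "alert_interval": "00:30:00",
}
response = requests.post(BASE + "events/", data=payload)
print(response.status_code, response.json())

# Retrieve a single event (GET); PUT and DELETE work on the same URL
print(requests.get(BASE + "events/1/").json())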

And here we reach the conclusion of this blog post. It was just an introduction and there is more to know. We still haven’t performed any authorization or authentication to control who can access our API, or generated tokens to track its users. As I said, it’s just a small introduction. I’ll come up with these topics as soon as I learn them.

References

  1. https://www.django-rest-framework.org/
  2. https://medium.com/backticks-tildes/lets-build-an-api-with-django-rest-framework-32fcf40231e5
  3. https://scotch.io/tutorials/build-a-rest-api-with-django-a-test-driven-approach-part-1

This is it for now. Bidding you goodbye! Meet you next time.

Till then be curious and keep learning.

by gutsytechster at February 13, 2019 07:06 PM

Shivam Singhal (championshuttler)

2018 Year in Review with AMO

Sibelius Monument, Finland

Each one of us has some goals to complete, things to learn, and places to visit. With the year coming to an end, it is time to look back and see what we did over the last 365 days.

Well, 2018 has been a phenomenal year for me. The major part of 2018 was spent working with the Addons aka AMO team. I learned how to work remotely with a cross-cultural team. I met some super awesome people like Caitlin, Rebecca and many more. I fixed ~50 bugs in AMO. I got to meet a lot of great people, built connections and learned things. I am really happy to see a few of my goals completed. I also failed at some things miserably.

Here is everything I did in the last year.

January

  • Got the idea for create-web-ext — a scaffolding tool for browser extensions.
  • Talked to my mentor Trishul about it.

February

  • Pitched the idea of create-web-ext to Mozilla Addons team and asked to submit it as GSoC Project.
  • Declined as GSoC Project. Decided to go ahead to develop it.
  • Made team with my Mentor Trishul and Tushar to start working on the project.
  • First International Flight to Finland for Methane Hack. Won 1500 Euros.

March

  • Spent many sleepless nights with Trishul, Tushar to work on create-web-ext.
  • Made the prototype of create-web-ext. Trishul pitched it in Addons Show & Tell Meeting. Got good feedback about it 🌟.

April

  • My first code contribution to AMO, a small patch for amo-frontend.
  • Was working on another patch, sadly never completed it, huh.
  • Applied for the Featured Addons Advisory Board. REJECTED 😎

May

  • Fixed 8 bugs in addons-server and amo-frontend
  • Was working on twitter card implementation for addons, sadly never completed it. Felt demotivated so many times due to this bug.

June

  • Sent 9 patches to addons-server and amo-frontend. Learned about the css property: word-wrap: break-word;
  • Went to Finland again to OuluHack Hackathon. Won 1000 Euros 💵

July

  • Sent 3 patches in amo-frontend.
  • Made the dropdown on AMO better. Learned about test assertions.

August

  • Fixed 6 bugs in addons-server and amo-frontend.
  • Deployed Static themes on production on AMO Frontend.
  • Learned that RTL means Right to Left and LTR Left to Right.
  • Wrote code in SQL for the first time ever for AMO Server.
  • Gave talk about browser extensions in DevConf
  • Met dgplug members Farhaan, Sayan and many others in DevConf’18.

September

  • Fixed 10 bugs in addons-server and amo-frontend.
  • First patch to Webextensions API.
  • Went through many sleepless night to setup Gecko on my laptop for the patch. Took more than 15 days. 🤓
  • Decided to dual boot with Fedora OS for Gecko.
  • Sat next through to Wifi router for ~8 Hours to setup Gecko.

October

  • Sent 5 patches to addons-server and amo-frontend.
  • Added developer policies in footer of AMO.
  • PyCon India, my 2nd time , which I attended as a volunteer.
  • Met dgplug members again in PyCon.
  • Applied for Mozilla Addons Reviewer. Rejected. Lesson learned — need to work on my JS Skills.

November & December

  • College Exams, practical and lot of college useless stuff.
  • Managed to solve 5 bugs in the mean time only.
  • Joined Featured Addons Advisory Board for next 6 months.

My plans for 2019

  • Helping beginners – For 2019, I am looking to help a handful of new code contributors to the AMO project, because I feel that while contributing code you get to learn a lot of things, like how to communicate; code is just one part of it.
  • More patches. I am looking to submit patches to Addon Manager and Webextensions API in Firefox.
  • Eat, sleep, code, gym, repeat. Being a software developer you are most likely to keep sitting on your chair for the major part of your day. This year I want to take out more time for physical activities.

by championshuttler (Shivam Singhal ) at February 13, 2019 05:27 PM

February 12, 2019

Kuntal Majumder (hellozee)


How Design Works?

Note: The title may be misleading, 😛 I started dabbling with graphic design back when I was in 7th grade, that time I saw someone working in Photoshop, probably was extracting a person and putting that extracted piece on top another picture, the wow moment that was and I be like I also want to do that, so got a copy of Adobe Photoshop 7, technically a pirated copy but well, Photoshop 7 was not being sold anymore back then and I was just experimenting with it, so ethically I was like, okay, it doesn’t hurt anyone, let that be so.

by hellozee at disroot.org (hellozee) at February 12, 2019 10:11 PM

February 08, 2019

Bhavin Gandhi

Where is bhavin192?

It’s been nearly a year since I posted anything on my blog. So, where was I? I was planning to migrate this blog from the WordPress setup to a static site generator. I started doing that, later got busy with other stuff, and it kept getting delayed. But I really wanted to write a new blog post once I got the site migrated. Finally, I have the blog migrated completely to HUGO.

by @_bhavin192 (Bhavin Gandhi) at February 08, 2019 06:19 PM

February 06, 2019

Jagannathan Tiruvallur Eachambadi

Ansible 101 by trishnag

We had an introductory session on Ansible in #dgplug and these are some notes from the class.

  1. Learned about the hosts file to create an inventory, https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#hosts-and-groups

  2. Different connections (ssh and local for now). I had also tested it against a server running CentOS.

  3. We then went on to create an ansible.cfg file in the demo directory which takes precedence over the global configuration.

  4. Learned to write a basic playbook which is a YAML file.

    • /bin/echo using shell module

    • ping using the ping module

by Jagannathan Tiruvallur Eachambadi (jagannathante@gmail.com) at February 06, 2019 07:15 PM

February 05, 2019

Shiva Saxena (shiva)

A quick tutorial on Ansible

Hello all! Today we had an Ansible session in #dgplug by trishnaguha. Before the session, I just had a vague idea about Ansible, that it is used for some sort of YAML-based deployment or something, but had never really tried it before. It was a nice experience using Ansible. Let me give you a quick wrap-up of the session.

What is Ansible?

Ansible is an IT automation tool. It can configure systems, deploy software, and orchestrate more advanced IT tasks such as continuous deployments or zero downtime rolling updates.

On a simple note you can automate tasks with ansible 🙂

Read about this tool from Ansible Documentation.

The Tutorial Begins

Prerequisite:

  • GNU/Linux
  • Ansible >= 2.6.0
  • SSH key-pair
  • openssh-server

Step by Step:

1. Run sshd (if it is not running):

$ sudo systemctl start sshd

2. Copy your ssh key to localhost

$ ssh-copy-id <username>@127.0.0.1

3. Run first ansible command

$ ansible all -i "localhost," -c local -m ping

It returned SUCCESS pong

4. Say hello to localhost 🙂

$ ansible all -i "localhost," -c local -m shell -a '/bin/echo hello'

It returned hello

5. Create a directory and go inside it:

$ mkdir demo; cd demo

6. Create a file here:

$ touch hosts

7. Put some content in the file:

 $ echo "localhost ansible_connection=local" >> hosts

This file hosts  is known as inventory.

The way we added localhost in our custom inventory, we call it ungrouped hosts.

See default hosts file of ansible in your system at /etc/ansible/hosts

8. Run ansible using our custom inventory.

$ ansible all -i hosts -m shell -a '/bin/echo hello'

It returned:  hello

9. Edit the inventory now (to make localhost a grouped host).

Put [webserver] (group label) above localhost ansible_connection=local and the content of the hosts file becomes:

[webserver]
localhost ansible_connection=local

10. Run ansible again using group name.

$ ansible webserver -i hosts -m shell -a '/bin/echo hello'

Till now, all the ansible commands we have used are called ad-hoc commands, which is something that you might type in to do something really quick, but don’t want to save for later.

11. Now have a look at playbook

As Trishna informed us about “playbook” in her own words:

Till now we were passing all the operations that need to be executed as command line arguments. We would not want to run these modules/tasks as arguments every time we want to configure something; it is not feasible if we want to execute multiple operations at a time, and we also want the operations to be saved.

This is where the term “playbook” comes into play. A playbook is a YAML file that contains one or more plays, where each play targets a host or group of hosts and performs a series of tasks on them.

And a bit about modules:

Modules are the programs that perform the actual work of the tasks of a play. The modules referenced in the playbook are copied to the managed hosts. Then they are executed, in order, with the arguments specified in the playbook

The -m argument in the above ansible commands specifies the module to use.

12. Create a playbook file (.yml)

$ touch demo.yml

13. Put content in demo.yml (care about indentation)

- hosts: webserver
  connection: local

  tasks:
  - shell: /bin/echo hello

Explanation of the content:

– webserver is the name of the group.
– connection specifies the connection plugin we want to use to communicate with the host.
– The keyword tasks contains the operations that are to be performed on the destination host.
– Each operation <module (shell) with its arguments/options> is called a task. We can add multiple tasks under this section.

14. Run playbook

$ ansible-playbook demo.yml -i hosts -v

ansible was the command we were using for ad-hoc commands, whereas ansible-playbook is the command for running playbook.

15. Edit playbook file

Now we tried 2 tasks in playbook, content of demo.yml becomes

- hosts: webserver
  connection: local

  tasks:
  - shell: /bin/echo hello

  - ping:

16. Run playbook again

$ ansible-playbook demo.yml -i hosts -v

17. Create a custom ansible.cfg

Certain settings in Ansible are adjustable via a configuration file: ansible.cfg

Default configuration can be found here: /etc/ansible/ansible.cfg

Let’s create our own custom ansible.cfg

$ touch ansible.cfg

18. Add following contents to ./ansible.cfg

[defaults]
inventory=hosts

Explanation of the content:

– [defaults] is the section in the ansible.cfg file where we can pass certain configuration for our playbooks.
– Here inventory=hosts means we are telling Ansible to use the inventory file “hosts”.

19. Run playbook again (Note: we do not have -i hosts anymore)

$ ansible-playbook demo.yml -v

Ansible will always look for ansible.cfg in the current directory first, and then in the default location.
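
A quick way to confirm which ansible.cfg was actually picked up is the version output, which prints the config file path on its second line:

$ ansible --version | head -n 2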

20. Read more about ansible 🙂

Useful references

Conclusion

With this, we have reached the end of this post. Overall, I found Ansible to be a great tool, with so much left to learn. I would like to thank Trishna Guha for giving an amazing session!

Thank you!

See you in the next post 🙂

 

 

by Shiva Saxena at February 05, 2019 04:36 PM

Failed to connect to lvmetad: booting issue

I have gone through: “A connection to the bus can’t be made”
And: “ERROR No UMS support in radeon module!”
Now dealing with: “Failed to connect to lvmetad”
This trilogy has become a funny and unexpected blog series on “A Habit of Learning”. Let's find out: do I get rid of these booting issues once and for all, or is another error keeping an eye on me? (Noooooo!)

Machine Specs:

https://www.sony.com.sg/electronics/support/laptop-pc-vpc-series/vpcea45fg/specifications


** Grab Your $1000 Student Scholarship **
I am excited to share as my blog’s announcement:

apply.freshprints.com/scholarship6/?


Problem:

* Type: Booting time issue
* Effect: Slow booting
* Error Message:

failed to connect to lvmetad

* Brief Explanation: After upgrading from Ubuntu 16.04 to 18.04 and then updating my graphics drivers, I am getting this error while booting, which makes the boot slow.

Cause:

Kernel bug: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=799295

What is lvmetad?

From man page of lvmetad:

lvmetad is a metadata caching daemon for LVM. The daemon receives notifications from udev rules (which must be installed for LVM to work correctly when lvmetad is in use). Through these notifications, lvmetad has an up-to-date and consistent image of the volume groups available in the system. By default, lvmetad, even if running, is not used by LVM. See lvm.conf(5).

Solution:

This is the key reference I used to resolve this issue and speed up my booting time:

Answer of Shahriar Shovon solved the issue for me:
https://support.linuxhint.com/question/lvm-issue-ubuntu-18-04-failed-to-connect-to-lvmetad/

What I did:

Edit /etc/lvm/lvm.conf file with the following command:

$ sudo nano /etc/lvm/lvm.conf

Now, find the line use_lvmetad=1 and change it to use_lvmetad=0
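
If you prefer a one-liner over opening an editor, something like this should make the same change (assuming the usual key = value spacing used in lvm.conf):

$ sudo sed -i 's/use_lvmetad *= *1/use_lvmetad = 0/' /etc/lvm/lvm.conf
$ grep use_lvmetad /etc/lvm/lvm.conf    # confirm the change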

 

Now, run the following command to update the initramfs file for the new kernel:

$ sudo update-initramfs -k YOUR_KERNEL_VERSION -u

$ sudo sync

The command to update the initramfs may differ for different distros. To get my kernel version, I just pressed Tab during the command $ sudo update-initramfs -k <tab> and the available kernel versions appeared; I selected the latest one.
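
If you are not sure which kernel you are running, these standard commands (not from the original steps) help:

$ uname -r              # kernel currently running
$ ls /boot/vmlinuz-*    # kernel images installed on disk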

Reboot and you are good to go!



Conclusion:

I just disabled that service (the best I could do) to get rid of this issue.

After this, I was able to boot without lvmetad error. But wait…
Not again! The boot is still slow (~2 minutes) and it gets stuck for around 15-20 seconds at a line that says:

Scanning for btrfs file systems

I found this solution: https://unix.stackexchange.com/questions/78535/how-to-get-rid-of-the-scanning-for-btrfs-file-systems-at-start-up, but never used it. I am okay as long as I am not seeing any “error” or “failed” type of words while booting. So that's it. All okay now! 🙂

What do you think? Should I remove btrfs-tools from the system? Let me know in the comments section below.

I hope this post will help someone.

Thanks for reading!

by Shiva Saxena at February 05, 2019 10:45 AM

[drm:radeon_init[radeon]] ERROR No UMS support in radeon module!: booting issue [solved]

This post is in continuation with my previous blog post about “A connection to the bus can’t be made: booting issue”. So, as soon as I got rid of my prior booting error I got another one. But this time it was more specific (related to radeon device) and easy to find out the solution over web. Here is what worked for me.

Machine Specs:

https://www.sony.com.sg/electronics/support/laptop-pc-vpc-series/vpcea45fg/specifications

Problem:

* Type: Booting time issue
* Effect: Slow booting
* Error Message:

[drm:radeon_init[radeon]] ERROR No UMS support in radeon module! [Solved]

* Brief Explanation: After updating my <GNU/Linux>, I am getting this error while booting, which makes the boot slow.

Cause:

After going through this, I found out that the reason lies with the graphics drivers. I didn't follow the solution listed there but did something more straightforward.
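
One quick check, not part of the original post, to see which kernel driver the GPU is actually using:

$ lspci -k | grep -EA3 'VGA|3D'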

Solution:

Install/update/upgrade the graphics drivers. That's it! I followed the instructions of Joshua Besneatte in this answer https://askubuntu.com/questions/1066105/how-to-install-amd-graphic-drivers-on-ubuntu-18-04, which are as follows:

sudo apt update
sudo apt upgrade
sudo apt autoremove
sudo apt autoclean

Now, add the AMD updates PPA and update:

sudo add-apt-repository ppa:oibaf/graphics-drivers
sudo apt-get update
sudo apt upgrade

Then reconfigure your packages to be safe:

sudo apt install --reinstall xserver-xorg-video-amdgpu
sudo dpkg --configure -a
sudo dpkg-reconfigure gdm3 ubuntu-session xserver-xorg-video-amdgpu

Now simply reboot. It worked for me. 🙂

Next: Failed to connect to lvmetad: booting issue

Conclusion:

I believe that getting updated graphics drivers is the permanent solution to the error discussed in my previous post, because somewhere I knew from the start that the problem was associated with the GPU drivers: I was able to boot perfectly with Linux Mint in compatibility mode (that is, without GPU acceleration).

I went through 2 booting issues one after another. I still don't know why these errors were attacking me back to back, one by one; only the message kept changing, and I was still not getting a clean and fast boot. Quite hilariously, after resolving this issue I was again dealing with another one, and this time the message was:

failed to connect to lvmetad

Solution in the next post.

I hope this post will help someone, and may they not run into the next error in the queue like me. But if you know any other solution for the same issue, please do write it in the comments section below; that would be helpful for someone else.

Thanks for reading! 🙂

by Shiva Saxena at February 05, 2019 10:32 AM

A connection to the bus can’t be made: booting issue [solved]

Hello and welcome back to “A Habit of Learning”! I couldn't write for so long due to repetitive health issues, then exams, then an exciting chess tournament, then a bit of LaTeX and cookiecutter-like tools, and here I am. Before I write about anything else of what I have been going through these days, I feel there is a need for THIS blog post, as I couldn't find many solutions over the web regarding the issue.

Recently, I went through this issue while installing Ubuntu 18.04 in my relative’s laptop.

Machine Specs:

https://www.sony.com.sg/electronics/support/laptop-pc-vpc-series/vpcea45fg/specifications

Problem:

* Type: Booting time issue
* Effect: Unable to boot completely
* Error Message:

(gvfsd-metadata:743): GUdev-CRITICAL **: 00:18:28:319: g_udev_device_has_property: assertion 'G_UDEV_IS_DEVICE (device)' failed
A connection to the bus can't be made

* Complete error: image shown below, taken while trying Linux Mint (the same issue was there in Ubuntu 16.04 and 18.04).

[Image: boot screen repeatedly printing “A connection to the bus can't be made”]

* Brief Explanation: Tried to boot my <GNU/Linux distro>; the initial <seconds> appear to be a normal boot. Then comes a black screen and nothing happens thereafter, except a message shown repeatedly: “A connection to the bus can't be made”. I waited for around <minutes> but the system was unable to boot completely.

Cause:

I can't say what the actual reason might be, but as far as I searched over the web, this can arise due to the dedicated GPU your machine has (at least it did in my case).

Scenarios:

There are 2 possible scenarios I have experienced which are as follows:

1. In booting while trying Ubuntu 18.04, Mint – Cinnamon with Live USB/CD
2. In booting after installation.

One of my friends also had the same issue while shutting down his Ubuntu 18.04. :p

Solutions:

Here are a couple of solutions I tried to get rid of this issue; I can't say which one may work for you 🙂

One of my friends solved this issue after upgrading his OS (from Ubuntu 16.04 to 18.04).

So I tried that first: I installed Ubuntu 16.04 and then upgraded it to 18.04. But that didn't work; instead, it became the cause of the error discussed in my next post. All solutions listed below and in the successive posts (2nd and 3rd) of this series were done on Ubuntu 18.04.

1. Arrow Keys [Before/After Installation]

It appears to be silly, but it worked for me 🙂
After 5-10 seconds of normal booting, I hit the arrow keys up, down, left, right, repeated 2-3 times, and pressed the Enter key 2-3 times (more than a solution it appears to be an act of frustration, which is exactly what it was). And voila! My laptop booted completely after 30-70 seconds. But the problem persisted on the next boot. So it is not a permanent solution, but surprisingly it may work as a temporary workaround.

This ^^ is not the best solution, of course. Let's see some other alternatives that worked for some people.

2. Setting Nomodeset [Before/After Installation]

The reason being that

nomodeset

The newest kernels have moved the video mode setting into the kernel. So all the programming of the hardware specific clock rates and registers on the video card happen in the kernel rather than in the X driver when the X server starts.. This makes it possible to have high resolution nice looking splash (boot) screens and flicker free transitions from boot splash to login screen. Unfortunately, on some cards this doesn’t work properly and you end up with a black screen. Adding the nomodeset parameter instructs the kernel to not load video drivers and use BIOS modes instead until X is loaded.

source: https://ubuntuforums.org/showthread.php?t=1613132

This solution also worked for me. To set the nomodeset option there are 2 cases:

2.1. While trying OS with Live USB/CD

In this case, while in the boot menu, press F6 and choose “nomodeset”, and you should be able to boot properly. I did this and installed the OS with the hope that the issue would go away after a complete installation (but it didn't).

[Image: Ubuntu boot options menu where nomodeset can be selected]

2.2. While booting after installation.

As written in this link https://askubuntu.com/questions/38780/how-do-i-set-nomodeset-after-ive-already-installed-ubuntu/:

  • While booting press –> Shift (to go to grub menu)
  • While in boot menu –> Press ‘e’
  • Find the line start with `linux`
  • Replace -> “quiet splash” with “nomodeset” or add “nomodeset” before “quiet splash”
  • Press –> CTRL + X to boot

Once the boot is completed, you need to set “nomodeset” permanently in your grub configuration using the instructions of Coldfish in this answer https://askubuntu.com/questions/38780/how-do-i-set-nomodeset-after-ive-already-installed-ubuntu/, which are:

sudo vim /etc/default/grub

and then add nomodeset to GRUB_CMDLINE_LINUX_DEFAULT:

GRUB_DEFAULT=0
GRUB_HIDDEN_TIMEOUT=0
GRUB_HIDDEN_TIMEOUT_QUIET=true
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nomodeset"
GRUB_CMDLINE_LINUX=""

And then save and exit :x , then simply run:

sudo update-grub

Now, reboot and you are good to go. This also worked for me. But still, 🙂 nomodeset is a temporary solution which simply works around the cause of the problem; it doesn't solve the cause itself.

3. Update Distro [After Installation]

Simply run:

sudo apt update
sudo apt upgrade
sudo apt dist-upgrade
# And just to avoid any doubt
sudo apt full-upgrade

I applied all 3 solutions listed above. And finally, I was not getting the error “A connection to the bus can't be made” on successive boots.

Next: [drm:radeon_init[radeon]] ERROR No UMS support in radeon module!
Next to next: Failed to connect to lvmetad: booting issue

Conclusion:

I knew that “nomodeset” is just a temporary solution. So I tried to boot again with only the default options, i.e. “quiet splash”. And yes, I didn't get the previous error line! (The problem might have got solved during the update.)

But for me it didn't come as a clear win, because now I was dealing with the second issue in line. And this time the error was:

[drm:radeon_init[radeon]] ERROR No UMS support in radeon module!

With an hour of research I was able to solve this error as well. And I think the solution to this new error is the permanent solution to the previous one. Why do I think so, and what is the solution to this new error? I will soon write my next post unveiling the same.

I hope this post will help someone. And if you know any other solution regarding the same issue, please do write in the comments section below, that might also help someone else.

Thanks for reading!

 

by Shiva Saxena at February 05, 2019 10:12 AM

February 03, 2019

Kuntal Majumder (hellozee)

Enough of Youtube

“Youtube”-ing seems like a trendy hobby for most of the people of my age, vlogs seem to be taking over blogs but I don’t have the required setup to record videos nor the time to invest in making videos.

by hellozee at disroot.org (hellozee) at February 03, 2019 03:16 PM



February 02, 2019

Kuntal Majumder (hellozee)

A Couple of Words

Speaking from a normal person’s point of view, do praises hurt? or if I can phrase it better, what hurts more, praise or criticism? An appropriate answer would be none, if you can take praise with your head held high you must be able to take criticism with the same attitude, right?

February 02, 2019 05:41 PM


January 29, 2019

Piyush Aggarwal (brute4s99)

Contributing to pandas

pandas
pandas: powerful Python data analysis toolkit

for PyDelhi DevSprint 02/02/19

pre-DevSprint reading material:-

Homework

0. Remove existing pandas installation

```
pip uninstall pandas
```

1. Fork me!

2. Clone the fork to your PC.

3. Install pandas from source.

  • cd into the clone and install the build dependencies.

    python -m pip install -r requirements-dev.txt
  • Build and install pandas. (takes ~20 minutes on an i5 6200U with 8GB RAM)

    python setup.py build_ext --inplace -j 4 
    python -m pip install -e .
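
A quick way to confirm that the editable install is the one being imported (not part of the original checklist; the version string of a development build usually carries extra git suffixes):

    python -c "import pandas; print(pandas.__version__)"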

Background

Work on pandas started at AQR (a quantitative hedge fund) in 2008 and has been under active development since then.

Chat with more pandas at Gitter.im!

Some Tips

Bad Trips

I accidentally rebased on origin/master. That was ~350 commits behind upstream/master !

Steps taken:-

  • reverted HEAD to just before rebase
  • merged upstream/master into origin/is_scalar
  • updated origin/master to get NO diffs in upstream/master and origin/master
  • ran git rebase origin/master and fixed a conflict in doc/source/whatsnew/v0.24.0.rst
  • pushed to origin/is_scalar.

Stay safe and make the internet a healthier place!

January 29, 2019 07:31 PM

January 27, 2019

Pradhvan Bisht (pradhvan)

Making things count

By the end of last year I graduated and, as I like to call it, my life's free trial ended 😛. Starting this week (21 Jan 2019), I had my first day of work. Simple things have been complex in the past few months, but I guess I survived with a lot of help, thanks to good people around me.

So it all started in the last semester, around the start of Feb 2018. I had done a couple of bad interviews, and even when the interviews were good I wasn't confident that I would fit in. Maybe it was imposter syndrome or something else, I don't know. Later, by graduation, I ended up choosing unemployment and giving some more time to just code random stuff. 😛 The main reason behind writing this blog is to give you the ups and downs so you can get a reality check of what it's actually like, because I have been talking to some of my college juniors who are now in the same phase where I was one year back.

Just a bit of background on me to get things very clear from the start: I started coding in Python seriously a bit late, around the start of my third year in college; by seriously I mean coding daily or maybe looking up small patches in upstream open source projects. I had been active in the local meetup groups PyDelhi and ILUG-D and had been introduced to #dgplug recently, so I wasn't a complete noob in the world of open source tech in general. To put it nicely, I was LAZY. I am not proud of it, but yeah <pip install regret >.

So the journey started after college, by the start of August. I had some family problems in July, so getting used to things back home took some time. I had read a lot of blogs about people taking a break and learning to code, but most of them were about someone who had not coded in their life and in the coming six months taught themselves to code and got a job, so I could not relate to them. So for the first two weeks I used to check out the syllabus of coding boot camps and roadmaps for becoming a backend engineer. Here is my first mistake that someone should really avoid while travelling on the same road, because as a wise friend of mine once said:

Life is too small to make all the mistakes yourself; sometimes it's best to learn from others' mistakes.

So coming back to the point,

Talk to people even if you don't want to: I was kinda lucky to have got a college in Delhi, because Delhi has some awesome tech communities, but when college ended I went back home to Nainital and was missing the meetup culture. I would still talk to people on #dgplug; thank god it's an online community, otherwise I would have been completely lost. Even though I used to talk to people, I did not actually ask for help in figuring things out, maybe because of what these people would think, or maybe because people would ask me not to do it and to get whatever job I was offered ASAP. This changed when I read 6 Bags and a Carton by our very own @fhackdroid. 😛 It was at that moment I thought I was lucky and this wasn't a bad idea. Later, when I was staying with him, Sayan, Chandan, Devesh and Rayan during PyCon India 2018, I opened up about what I should do to make the most of the time, and he along with Sayan helped me a lot to make things clear: what I should be focusing on and what projects I could do. They also suggested reading the blogs of people who took a break and went to the Recurse Center to work on their tech skills.

So the point I am trying to make is that you should talk to some people around the community even before you start planning. That would eventually help a lot in making a concrete roadmap for the next x months, because you will fail a lot, will get diverted, and, trust me, will sometimes even question whether you took the right decision or not. During those times it's best to have some experienced people helping you out, and I think that if you have a concrete roadmap these feelings will shrink to a size where you can just ignore them and work.

Time is money: I initially planned for an entire year! God knows what I was thinking. This could be because of the first problem I mentioned, of not talking to people. But yes, unless and until you have a job waiting for you, or you want to focus on a small subpart of a particular topic, trim down the time, because it takes time to get stable after your gap. In my honest opinion, three months are more than enough, unless and until you're just switching to learn to code and have not written a single line of code or know nothing about it.

Blogging to success: At first, I used to think, what's the point of blogging when I see all the awesome blogs out on the internet which are far better than mine? But later I realized it's not about your blog being the best, it's mainly about consistency, because I think blogging helps in two ways:

1. It helps you structure your thoughts that you can explain to ‘n’ number of people easily.

2. For all the research you do behind the blog you get to learn a lot and that learning sticks for a long time plus you get a backup of your notes to look back.

I would recommend that someone who is taking time off to code/learn/hack/build silly stuff should write a blog at least once every three days.

Document the shit out of it: I was heavily inspired by OBM, so I started documenting my daily working habits. I did set goals and future week goals, but the problem I faced was that I over-engineered the shit out of it, which at some point became tedious. I maintained a bullet journal with WakaTime installed in Sublime Text to track my coding time, and was also doing short sprints of 25 min each every time I sat down to code or read. Things went downhill pretty quickly because it required a lot of effort just to maintain the whole workflow; I am not saying it's bad, but it wasn't for me. I did a lot of iterations of the whole bullet journal and found the simplest one to be easy to maintain and easy to follow.

So I would say, don't get sad and totally give up the idea if you're not able to follow the whole idea of documentation; just keep evolving the process until it suits your needs, because this will definitely help you realize how much effort you have put in in the past and how much you still have to put in. This, trust me, helps in times when you feel like you're not doing enough work or that you made a wrong decision.

One last thing: all the very best. If you're taking that road, just remember to work hard and things will eventually pan out. If it did for someone like me who had no clue 😛, you at least have a heads up on things. 🙂

Finally, things wouldn't have been the same without the help from #dgplug, and I definitely owe a lot to them. The people in the community are always ready to help you in the right manner, not spoon-feeding you but making you independent. I got a lot of inspiration from different people in the community to work hard; I hope I can follow in the same footsteps 🙂

 

by Pradhvan Bisht at January 27, 2019 06:02 AM

January 26, 2019

Prashant Sharma (gutsytechster)

What are APIs?

Howdy fellows! What’s up?
So, I wanted to start with the REST API framework offered by Django. But before I moved any step forward, I realized that I didn't know what an API is. And that's where I went into the world of APIs. Yes, there is another world of APIs where they do everything in their own way; they talk and they walk their own way. They don't speak languages like we do; they are more technical in that case. They speak in terms of request and response. But just wait, before we go any further, let's discuss everything little by little.

Let's start with its full name. The term API stands for Application Programming Interface. Now, I am gonna use quite a common analogy for it, maybe the best one, which is probably why it is so often used to explain APIs. When we use our mobile phone or smartphone, we use the interface provided by the hardware in our hand and we can make it do anything. Can't we? Of course we can. Through that interface we can talk to, or simply interact with, our mobile phone. In similar terms, when one piece of software wants to interact with another, they do it using APIs. APIs are the interface for them. Hence the term.

When we talk about APIs, we often talk about two API paradigms.

  • SOAP
  • REST

We’ll try to understand a little about both of them

SOAP

SOAP stands for Simple Object Access Protocol. As I already mentioned, applications interact with each other using an API as an interface, in terms of request and response. You send a request to an API to fetch some data, and it gives the data back to you as a response. That's the very foundation of how we use an API.

[Image: A SOAP request and response example]

Credits: CodeProject

SOAP uses XML notation to format requests and responses. It provides higher security as compared to REST. It need not be used over HTTP only; e.g. it can be used over SMTP as well. It mainly uses two HTTP verbs, GET and POST: GET for retrieving data and POST for adding or modifying data.

REST

REST stands for REpresentational State Transfer. Here the request and response are usually formatted in JSON, though it can process any of XML, HTML or JSON. Since JSON is quite easy to understand, it is preferred. REST is built over HTTP, i.e. it can perform all the CRUD operations using different HTTP verbs.

HTTP verb      CRUD operation

POST           Create
GET            Read
PUT            Update
PATCH          Update
DELETE         Delete

Other than this, REST is made for the web, as it uses URIs (Uniform Resource Identifiers) and HTTP. Consuming a REST API is as simple as making an HTTP request.

[Image: A REST request and response example]

Credits: Cisco Learning Labs

In REST, we send a request to an endpoint and get a response in return. An endpoint is one end of a communication channel; each endpoint is a location from which the API can access the resources it needs to carry out an operation. Each response contains a status code representing the status of the request. A valid request gives a 200 OK status, and an invalid request may return a 404 NOT FOUND error. You can find a whole list of these status codes here.
While working with APIs, you often come across the term ‘payload’. Payload in programming means the relevant information or data. In APIs, when we talk about the payload, we refer to the data we receive apart from other metadata like content-type headers. As you may notice in the image above, the response contains the payload as well as other data referred to as response headers. These headers are the metadata which tell us about the nature of the request and response.
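
To see all of this in one shot from the command line, here is a small illustrative request; api.github.com is used purely as an example of a public REST endpoint, and any REST API would do:

$ curl -i https://api.github.com/users/octocat

The first line of the output is the status code (200 OK for a valid request), the lines after it are the response headers, and the JSON body at the end is the payload.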

Most APIs are free to use, like the Google Maps API, but only to an extent, i.e. they put some restrictions on their use. Many APIs provide an authentication process to keep track of API usage. The service provider issues an authentication key, aka an API key. These keys provide a way to identify the origin of a request. APIs can also be sold as a business product.
The question of which API paradigm to use really depends on one's use case. To know more about the differences between these two, you might want to have a look at this.

References and Further Reading

  1. https://learninglabs.cisco.com/lab/what-are-rest-apis/step/1
  2. http://www.soapuser.com/basics1.html
  3. https://www.upwork.com/hiring/development/intro-to-apis-what-is-an-api/

That's all for now. It was a great experience learning about APIs and getting to know such a great thing; I hope it will be the same for you. Thanks for stopping by this post. If you find any mistake or have any suggestion regarding this, feel free to comment in the section below. Meet you next time.

Till then, Be curious and keep learning!

by gutsytechster at January 26, 2019 07:14 PM

Piyush Aggarwal (brute4s99)

breaking free

PRIVACY
Free as in freedom

INTRODUCTION

When I started using this static blog, little did I know of all the trackers that came with the supporting resources used in a cool starter like this one. In this post, I explain the various kinds of trackers, and also give a personal teaspoon of what trackers I dealt with while sterilising this blog!

How does browser tracking work?

When you visit a website, third-party trackers (cookies, pixel tags, etc) get stored on your computer.

How many trackers exist in any given website depends on how many the website owner has decided to include. Some websites will have well over 60 trackers, belonging to a multitude of companies, while others might have only one - perhaps to track visitor numbers, or see where these visitors are coming from, or to enable a certain functionality. Some might have none at all.

Not all trackers are necessarily tied to companies tracking your browsing habits - but when you accept cookies, you’re saying ok to all the trackers that are there - including those feeding info back to companies.

What is being collected and Why?

Trackers collect information about which websites you’re visiting, as well as information about your devices.

One tracker might be there to give the website owner insight into her website traffic, but the rest belong to companies whose primary goal is to build up a profile of who you are: how old you are, where you live, what you read, and what you’re interested in. This information can then be packaged and sold to others: advertisers, other companies, or governments.

They are also joined by more well-known companies. Some of these are even visible: Google’s red G+ button, for example, is a tracker; Facebook’s “like” thumb is a tracker; and Twitter’s little blue bird is also a tracker.

Why does it affect me?

Data companies and advertisers also know which articles you read and which ones you skip, which videos you watch, and which ones you stop after 5 seconds; which promotional emails you read, and which ones you send to your Trash folder without opening; what you like on Facebook, what you retweet, what you heart on Instagram.

When you put all these things together, you end up with your own unique online fingerprint — which immediately identifies you, with all your likes and dislikes and personal traits

And that’s potentially very bad news, because once they know exactly who you are and what makes you tick, companies and advertisers can:

  • spam you with finely-tuned, targeted ad campaigns that follow you around the web.
  • potentially jack up their prices for you.
  • invade your privacy and chip away at your anonymity online, which nobody likes.

[Image: Web trackers, an illustration from a post by Princiya]
She writes on awesome topics at FreeCodeCamp; you should check out her posts!

Tracking mechanisms

Cookies

Cookies are the most widely known method to identify a user. They use small pieces of data (each limited to 4 KB) placed in a browser storage by the web server. When a user visits a website for the first time, a cookie file with a unique user identifier (could be randomly generated) is stored on the user’s computer.

Subsequent visits to the Facebook page do not require you to login, because your details will be remembered by the browser through a cookie stored during your first login.
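
A small way to see this from the command line (illustrative only; replace the URL with any site, and note that many sites set no cookie on a bare request like this, so the output may be empty):

$ curl -sI https://example.org | grep -i '^set-cookie'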

Browser fingerprinting

Browser fingerprinting is a highly accurate way to identify and track users whenever they go online. The information collected is quite comprehensive, and often includes the browser type and version, operating system and version, screen resolution, supported fonts, plugins, time zone, language and font preferences, and even hardware configurations.

These identifiers may seem generic and not at all personally identifying. But, typically only one in several million people have exactly the same specifications as you.

Web beacons

Web beacons are very small, usually invisible objects embedded into a web page or email. Web beacons are also referred to as “web bugs,” which also go by the names “tags,” “page tags,” “tracking bugs,” “pixel trackers,” or “pixel gifs.”

In their simplest form, they are tiny clear images, often the size of a single pixel. They download as an image when the web page is loaded, or the email is opened, making a call to a remote server for the image. The server call alerts the company that their email has just been opened or their web page visited. This is why you should not display images in emails from senders you do not trust.

Web beacons are also used by online advertisers who embed them into their ads so they can independently track how often their ads are being displayed.

The Anonymization Myth

Most companies claim that they don’t identify you by name when they hand over a profile of you - but what does that really mean, when you can be identified easily through all the other information included?

Here’s a good read on anonymization.

Protecting yourself

While companies (sometimes) allow users to take their data off the company servers (for e.g. Google Takeout and Facebook), one can never be sure if that is the real deal or not. Companies might still be retaining derivatives or seemingly “anonymous” attributes from user data. As such, it's always a better move to refrain from giving away information as much as possible. Some ways are discussed below.

  1. Use browser add-ons.

    Many add-ons like Privacy Badger from EFF allow for users to take a look at all the third party trackers enabled by the website’s owner, and disable them.

  2. Use Tor or a VPN.

    If you connect to the Tor anonymizing system, or use Tor’s browser, your ISP will only know that you have connected to Tor; from there it loses the data trail. Of course the downside to this is that your browsing will be slower.

    Be aware, your unencrypted traffic to websites outside the Tor network passes through a complete stranger’s exit node: the person running the exit node can watch what you’re doing. All you’ve done is move from your ISP snooping on you to an exit node admin watching you. On the other hand, you’ll cycle through different exit nodes, so it’s harder to be identified and tracked by websites outside the Tor network.

    A virtual private network is an alternative that will work for lots of people, especially if your work has a VPN service that you can use for free. This again will cut off your ISP’s ability to see what you are doing.

    But do some research on your VPN provider. Do NOT use a free VPN provider because they face even stronger financial temptations to sell your information. If you use a VPN, you are effectively giving that company the same level of insight into your online life as your ISP. So pay for one, and check out their policies on what they do with the data they build on you.

  3. Use a different search engine.

    Google offers a wonderful service, but everything you type in its search box is logged and connected to you in as many ways as possible. It is then sold on.

So why not use a different search engine? DuckDuckGo is an awesome search engine with NO user data logging. This Quora answer tells more about the features of DuckDuckGo.

Getting rid of some trackers from your site

  1. ajax.cloudflare.com

    inherent on websites hosted by Cloudflare’s DNS.

  2. graph.facebook.com

    active when Facebook’s developer services (for eg: FB Comments plugin) are loading on a webpage.

  3. clients6.google.com

    active when webpages directly call Google servers for Javascript codes.

  4. fonts.gstatic.com

    active when Google fonts are called for CSS scripts.

  5. www.linkedin.com

    active when there are links to linkedin in the webpage.

Tracking the trackers


Lightbeam from Mozilla is a privacy browser extension that helps you discover who's tracking you online while you browse the web.

You can get it here.

Some links

January 26, 2019 04:01 PM

January 25, 2019

Sehenaz Parvin

The Open superstition

I have a grave question about a fact:

Why do parents blame the teachers when the student scores a low grade and simultaneously why do parents congratulate their kids when they score a good percentile???

I don't want to hurt anyone's interest. I just want to put forward that in both cases the student and the teacher are both contributors. The difference lies in the fact that in the first case the effort of the student is less, and in the second case both are contributing equally to the system.

Am I right??? We should never blame the teachers for our own mistakes. They are the guides of our life. So, I think next time, before blaming the teachers, we should think twice about it. It actually destroys the psychological image a kid has of a teacher.

And the same goes for the students too. Never blame your teachers. First think about your own mistakes, then come to any conclusions.

by potters6 at January 25, 2019 01:45 PM

January 20, 2019

Jagannathan Tiruvallur Eachambadi

Thoughts on Atomic Habits Commentary by Jason Braganza

Original post at https://mjbraganza.com/atomic-habits/

  1. Schedule and structure make or break the plan. Goals only show the direction of the task. Personally I would say this has brought all the change I need.

  2. Answer in the affirmative. You don't try quitting smoking; you don't smoke, period. Personally I don't identify as someone who can't eat butter or meat, but as someone who won't. This strengthens my resolve in what I believe needs to be done.

  3. Environments are inherently associated with specific habits. I go to the department to work to make it possible to just concentrate on the task at hand instead of procrastinating. This has worked really well and it can be further improved but it is much better than being at home.

  4. Jason mentions a more important point that I had realized earlier but failed to follow through on. It is to always repeat and practice something even if one is not good at it. We can always improve on the parts of the task that are lacking rather than ditching the whole task.

  5. I have made running more enjoyable by running with an acquaintance and making them a friend. It is more interesting to interact with someone you don't see every day.

  6. I will just leave the quote here, “Never miss twice. Missing once is an accident. Missing twice is the start of a new habit.”

I think most of it boils down to building an identity and keep improving it to best serve our needs. For me, it is a matter of compounding the effort put in building up a schedule this month to make it smoother in the coming days. As a nice side effect I am getting 6km of biking done everyday for free :)

by Jagannathan Tiruvallur Eachambadi (jagannathante@gmail.com) at January 20, 2019 07:31 AM

January 19, 2019

Kuntal Majumder (hellozee)

Caring about the directories

The story begins in PyCon India 2018, I, with a tired soul (registration desk is not a quite nice place for resting), shivering in freezing cold, after a dosage of Paracetamol, asked fhackdroid to tell me about any project which uses Go, so that I can also put some patches during devsprint.

January 19, 2019 02:35 PM


January 14, 2019

Piyush Aggarwal (brute4s99)

arcanist

Phabricator
Phabricator

INTRODUCTION

This post is dedicated to Arcanist, a command-line interface to Phabricator. Phabricator is a set of tools that help KDE build better software, faster.

Various command-line based solutions out there help developers to achieve a good workflow across features and projects (Git, Mercurial et al); Arcanist takes the same approach, but feels a lot more practical to me.

Arcanist User Guide states thus:-

Arcanist provides command-line access to many Phabricator tools (like Differential, Files, and Paste), integrates with static analysis (“lint”) and unit tests, and manages common workflows like getting changes into Differential for review.

Setting up Arcanist

The two dependencies to Arcanist are - git and php. Install them using sudo pacman -S git php (or equivalent for your distro).

Then you can install Arcanist itself. It was as simple as yay -S arcanist for me. [Image: installing arcanist] Users of other distros may want to look for the Installing Arcanist subsection in the Arcanist quick start.

Next up, get the source code of the project you wish to work on by cloning it from cgit.

Now then, let’s dive into development! 🤖

Development with Arcanist

  1. You may find an interesting bug from KDE’s bug tracker or task from your project’s Workboard.

  2. Always create a feature branch/ bookmark before touching any file in a clean clone. Use arc feature for it.

    arc feature name_of_feature_branch
  3. Poke around, play with the code, do your thing.

  4. When ready to submit a patch, type in arc diff. This will also help you maintain your submitted patches. Complete the following forms, which look like this: [Image: the arc diff form]

  5. That’s it! Your patch is submitted for review! You also get a link to share with others and see how the submission looks on Phabricator!

  6. Continue hacking on another bug or Task, and wait for the review on the submitted patch!

Remember to make a different feature branch beforehand!
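
Putting steps 2 to 6 together, a typical cycle looks roughly like this (the branch name and commit message are made up for illustration):

arc feature fix-pairing-crash      # hypothetical feature branch
# ...edit the code...
git commit -am "Fix crash on pairing"
arc diff                           # create the revision and submit it for review
# after review comments: edit, commit, and run arc diff again to update the same revision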

Tips

The world is not perfect, and many-a-times the reviewers will suggest changes to the patch before flashing the green light. Just revisit the branch, do the changes required, and hit arc diff again!

If you’re not sure about this, use arc diff --preview. I always use it before associating a diff with a submission! 😉

arc patch

You can always try out any submitted patch along with the latest master by using the arc patch command!

arc patch D18812

This command will do the following in definite order:-

  1. create a new feature branch with name arcpatch-D18812.
  2. apply patch D18812.
  3. set local tracking to the local branch arcpatch-D18812.
  4. checkout arcpatch-D18812 feature branch.

Don’t worry, even if it’s an old patch, Phabricator remembers the master branch commit the current patch was based on! As an example:-

If you pull a particularly old patch, say D16553, I get a branch based on commit 657dec, whereas the current HEAD of master is 708bcb !

arc feature

Suppose you were at master in your clone, and you do arc feature some_name. Now the some_name branch will be set to track the local master; that is, if you commit anything to just the local copy of master that you have, and then git checkout some_name, git will ask you to perform "git pull" as your current branch is behind by some commits.

TL;DR

doing git pull in some_name will import the changes from the branch you had checked out before running arc feature some_name.

arc land

Perform arc land after you have completed the following checklist:-

  • Your submitted patch has been accepted by reviewers.
  • The reviewer(s) have EXPLICITLY tasked you to land the patch.
  • You do have a Developer Access Account in order to land the patch.

arc land automatically rebases (and errors if that failed), so you don’t have to do that manually, unlike Git.

This quickstart should be enough to get you started on KDE's Phabricator and setting sail on some binary adventures!

January 14, 2019 08:01 PM

Prashant Sharma (gutsytechster)

It’s never too late!

Hey there everyone! It’s me again after a long gap.

So, I've been busy, or you could say too lazy, to write about anything. Maybe because I didn't learn anything significant throughout that time. I won't say that I didn't learn anything at all; I was going through blogs, articles and other things, but couldn't take out time to write about them. However, I realized the cause that was hindering me from writing blogs: it was trying to be perfect about what I am gonna write about.

Even though I tried to continue writing while continuing with the #dgplug sessions, I couldn't keep up that habit once they ended. Maybe because I wasn't able to develop a habit at all; maybe it was just a periodic habit, or I just started procrastinating. I usually come across articles or blog posts of different people which inspire me to do something. This time there was an article by one of the #dgplug folks which I came across when I was going through its students planet. This is the article written by Pradhvan. It really inspired me; I felt it described the same thing that happens with me.

A small note about my learning

  • I started reading about Javascript from MDN. I was partially familiar with it but I wanted to know more.

  • I am planning to start studying DRF (Django REST Framework) in order to build skills for working with APIs, and since I have done the basics in Django, it appeals to me even more.

Currently, this is the only thing. Though I’ll keep updating as I learn anything new across my journey. I am writing this blog just to make a public commitment so that I don’t back off and really develop this habit.

That’s it for now. Meet you next time very soon(HOPEFULLY). Till then, Be Curious and Keep learning!

by gutsytechster at January 14, 2019 02:58 PM

January 11, 2019

Vishal Singh Kushwaha (vishalIRC)

Everything has a story

While reading a bunch of math on a piece of paper, one rarely gets enough time to contemplate on its origins. No-one is born with the answers. No-one gets handed a step by step plan, a plan which definitely leads to what one is destined to do. One’s purpose in life then, is nothing but an illusion created by society and his yearning for control.

Every once in a while, we work on something for the sake of the thrill, the fulfilment of getting the work done. Of achieving that milestone. Until the next one comes along, life has purpose.

As human beings we like listening to stories, and telling them. This is essential because our brains are capable of processing and retaining it, very well. Therefore, we must be careful about the stories we make about ourselves. You write your own story, then it is possible to make a bad draft the first couple of hundred times.

We underestimate ourselves, our abilities: I can never become an astronaut! well you never became one. I want a decent job! well you got one. You will only go as far as you think you can, or as far as your protagonist goes.

Well then, what’s your story? and is it any good? Sure hope to see you when you’ve taken the red pill.

Vishal K.

by vishyboy at January 11, 2019 10:41 PM

Piyush Aggarwal (brute4s99)

Arch

Simplicity is the ultimate sophistication.
-Leonardo da Vinci

After eons of self-doubt and mixed opinion, I finally decided to get Arch Linux up and running in my laptop!

How it all began?

My mentors at IRC insisted upon switching over to latest Linux distros. The reason was implicit: to work with packages having latest features. My IRC friends at #dgplug suggested me a few flavors to choose from- latest Ubuntu build, latest Fedora build, or a rolling release distribution.

What’s a Rolling Release?

A rolling release is a type of linux distribution model in which instead of releasing major updates to the entire operating system after a scheduled period of time, a rolling release operating system can be changed at the application level, whenever a change is pushed by upstream.

There are a couple of rolling release models – semi-rolling and full rolling – and the difference is in how and what packages are pushed out to users as they become available.

A semi-rolling distribution, such as Chakra Linux and PCLinuxOS, classifies some packages to be upgraded through a fixed release system (usually the base operating system) in order to maintain stability.

A full rolling release, such as Gentoo, Arch, OpenSUSE Tumbleweed, and Microsoft Windows 10, pushes out updates to the base operating system and other applications very frequently – sometimes as often as every few hours!

Why switch?

The main benefit to a rolling release model is the ability for the end user to use the newest features the developer has enabled. For example, one of the newer features of the Linux kernel, introduced with the 4.0 update, was the ability to update the kernel without restarting your computer. In a rolling release distribution, as soon as this update was tested and marked as working by the development team, it could then be pushed out to the user of the distribution, enabling all future updates to the kernel to occur without computer restarts.

What’s new?

For Ubuntu users, it's the same kind of thing as coming to Linux from Windows: there is a learning curve.

An excerpt from the Arch Wiki states thus:-

Arch Linux is a general-purpose distribution. Upon installation, only a command-line environment is provided: rather than tearing out unneeded and unwanted packages, the user is offered the ability to build a custom system by choosing among thousands of high-quality packages provided in the official repositories for the x86-64 architecture.

Oh, and one more thing: none of the proprietary software/packages/drivers come with the base installation. Read more about them here.

If you still think you can steer clear of proprietary software, think again, and one more time.

Theory’s over.

Baby Steps

A few pointers before we start the installation:-

  1. The installation requires a working internet connection. So, I had a wired ethernet connection ready at my disposal. A WiFi module that's NOT based on a Broadcom chipset would be just as fine. Since I have the Broadcom chipset, I switched to a wired connection for the time being.
  2. Once the installation is done, all I'll have is a bare-bones system with a log-in shell. You must absolutely be comfortable with the terminal, as almost no graphical utility comes out of the box.

I grabbed a USB and prepped it with this Arch Linux img. First thing after booting from the USB – I connected to the internet.

In case your WiFi has a Broadcom chipset, follow this. You need the driver firmware for the Broadcom chipset to get it working on your laptop, since it's proprietary.

Connecting to the Internet

Connecting to internet via Ethernet

Just Plug n Play, you’re good to go!

Connecting to internet via WiFi

1. Create a profile for your WiFi with

# wifi-menu

2. Connect to the profile you set up with

# netctl start <profile_name>

3. If you want it to connect automatically at startup

# netctl enable <profile_name>

Connecting to internet via Android USB tethering

1. List all the DHCP interfaces that are now available

$ ls /sys/class/net

2. Connect to the new interface provided by Arch for your USB-tethered device!

# dhcpcd <enp....something_profile_name>

Check if you’re online : $ ping -c3 google.com


There are many good tutorials out there, follow any one of these.

Now that Arch was installed, I booted up the system and got connected to the internet again.

image

Now that I was online, I set up a GUI !

Installing a GUI

So the first thing I decided to get for Arch was a GUI! It's quite a simple procedure: you need a display manager and a Desktop Environment that talk to the X server.

X Server

X is an application that manages one or more graphics displays and one or more input devices (keyboard, mouse, etc.) connected to the computer.
It works as a server and can run on the local computer or on another computer on the network. Services can communicate with the X server to display graphical interfaces and receive input from the user.

My choice: # pacman -S sddm plasma

IMPORTANT! Install a terminal before rebooting to the GUI!

# pacman -S konsole

Configuring terminal

Sources + References:-

  1. http://jilles.me/badassify-your-terminal-and-shell/

Configuring weechat

Sources + References:-

  1. https://alexjj.com/blog/2016/9/setting-up-weechat/
  2. https://wiki.archlinux.org/index.php/WeeChat

Surfing through some sites also got me through a good command that would be of much help to most!

/mouse enable # In case you’d like to use the mouse in weechat

/redraw # A saviour for guys SSH-ing to any ZNC

You can’t find the packages through pacman?

Enter AUR : the Arch User Repository

Suppose I have to get a package that cannot be found by pacman. I will try to find it at AUR home page.

For example: ngrok. Now, after reading the description, I know this is the package I was looking for. So, now I will see how I can acquire the package.

Here I can see two ways to acquire the package- by git clone (preferred), or by downloading the tarball.

It gives me one file: PKGBUILD. These PKGBUILDs can be built into installable packages using makepkg, then installed using pacman.

Fakeroot

Imagine that you are a developer/package maintainer, etc. working on a remote server. You want to update the contents of a package and rebuild it, download and customize a kernel from kernel.org and build it, etc. While trying to do those things, you’ll find out that some steps require you to have root rights (UID and GID 0) for different reasons (security, overlooked permissions, etc). But it is not possible to get root rights, since you are working on a remote machine (and many other users have the same problem as you). This is what exactly fakeroot does: it pretends an effective UID and GID of 0 to the environment which requires them.
P.S:-

  • UID: User ID
  • GID: Group ID

The git clone method is preferred since you can then update the package by simply git pull.

Why so much fuss ?

You can always try out AUR helpers. I set up yay in my configuration, since it also shows DIFFs when installing new/upgrading packages through AURs.

Why would you want to read DIFFs?

Essentially, a PKGBUILD is a shell script (so it can possibly contain malicious or dangerous content, so look before you leap), but since it is run under fakeroot, there is some level of security, albeit limited. Still, we shouldn't try and push our luck.

So after all this, I successfully set up Arch Linux, WiFi, Desktop Environment, Terminal and Weechat in my laptop! Next was installing basic software packages and fine tuning the GUI to my personal tastes.

Firefox Developer Edition – For Web Browsing

tor-browser – For private internet access

Konsole – Terminal

Deepin Music Player – Music Player

Gwenview – Image viewer and editing solution

Steam – for Games

Kontact – for updates on calendar events

VLC – Video player

The end result

image

beautiful, isn’t it?

Setting up a personal Arch Linux machine taught me many things about the core Linux system, how exactly the system is set up during installation and how different utilities orchestrate to form my complete workstation ready to build beautiful code and software!

January 11, 2019 05:31 PM

My Testimony about Blockchain - Part 2

They’ll what ?

They’ll fork off of the network.

A byproduct of distributed consensus, forks happen anytime two miners find a block at nearly the same time. The ambiguity is resolved when subsequent blocks are added to one, making it the longest chain, while the other block gets “orphaned” (or abandoned) by the network.

But forks also can be willingly introduced to the network. This occurs when developers seek to change the rules the software uses to decide whether a transaction is valid or not. Forks can be classified into two- hard and soft forks; both have different implications for the network and ecosystem.

Hard forks are a permanent divergence in the block chain, commonly occurring when non-upgraded nodes can't validate blocks created by upgraded nodes that follow newer consensus rules.

Soft forks are a temporary divergence in the block chain caused by non-upgraded nodes not following new consensus rules

Miners can add blocks to the blockchain so long as every other node on the network agrees that their block fits the consensus rules and accepts it.

The Block Header

So what do these miners do exactly? They hash the block header. It is 80 bytes of data that will ultimately be hashed.

The header contains this info:

Name | Bytes | Description
Version | 4 | Block version number
Previous Hash | 32 | Hash of the previous block header
Merkle Root | 32 | The hash based on all of the transactions in the block
Time | 4 | Current timestamp as seconds (unix format)
Bits | 4 | Target value in compact form
Nonce | 4 | User-adjusted value starting from 0

genesis

A snap of the latest block at Bitcoin blockchain at the time of writing.

How would the consensus deem a mined block as accepted?

See the Bits part? It is the integer (base 10) representation of the compact form of the target that miners have to beat. The target is a 256-bit number that the hash sum of the block header must come in below; in other words, it is the MAXIMUM value acceptable by the consensus for the hash.

MAXIMUM value?

I thought you'd never ask! See the nonce part in the block header? Yup, miners need to start all the way from 0 (some may try to skip values, completely up to the miner) and work towards a number that, when used in the block header, yields a hash sum below the target. See the nonce in the latest block image? The miner who successfully relayed this value to the nodes received the prize money, i.e. 12.5 BTC! That's a lot of work and indeed a lot of bucks!
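To make the nonce hunt concrete, here is a toy proof-of-work loop in Python. This is not Bitcoin's actual implementation: the header bytes, the byte order of the comparison and the deliberately easy target below are made-up assumptions for illustration.

import hashlib

def mine(header_without_nonce, target, max_nonce=2**32):
    # Try nonces one by one until the double SHA-256 of the header falls below the target
    for nonce in range(max_nonce):
        candidate = header_without_nonce + nonce.to_bytes(4, "little")
        digest = hashlib.sha256(hashlib.sha256(candidate).digest()).digest()
        # Real Bitcoin uses a specific byte order; big-endian is used here for simplicity
        if int.from_bytes(digest, "big") < target:
            return nonce, digest.hex()
    return None

# A deliberately easy target so this toy search finishes in a fraction of a second
easy_target = 2 ** 245
print(mine(b"pretend this is the rest of the block header", easy_target))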

People buy special hardware (recent scarcity of GPUs? Curse those miners) and even computers specially built for this purpose! Ever heard of ASICs?

As it stands, mining on your own, on your single PC, is almost dead. The process of finding blocks is now so crowded and the difficulty of finding a block so high that it would take over a year to generate any coins on an average high-end gaming system. While you could simply set a machine aside and have it run the algorithms endlessly, the energy cost and equipment degradation and breakdown will eventually cost more than the actual bitcoins are worth.

Pooled mining, however, is far more lucrative. Using a service you can split the work among a group of people, using this equation:

(12.5 BTC + block fees – 2% fee) * (shares found by user’s workers) / (total shares in current round)

Put simply, that is how the system works. You work for shares in a block, and when it is complete you get a percentage of the block reward based on the share of the work your workers did. The more people in the pool, the higher the chances of finding blocks and earning rewards.
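As a rough illustration, here is that payout formula in Python, under one reading of it (the 2% pool fee is taken off the whole pot; the reward, fee and share numbers are made up):

def pool_payout(block_reward, block_fees, pool_fee_rate, my_shares, total_shares):
    # (reward + fees - pool cut) scaled by my fraction of the shares in this round
    pot = (block_reward + block_fees) * (1 - pool_fee_rate)
    return pot * my_shares / total_shares

# e.g. 12.5 BTC reward, 0.8 BTC of fees, a 2% pool fee,
# and my workers found 150 of the 100000 shares in the current round
print(pool_payout(12.5, 0.8, 0.02, 150, 100000))   # roughly 0.0196 BTC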

Types of Blockchains in use

Any blockchain can be classified into any one of these categories-

Public Blockchain

The most basic of all blockchain concepts. This is the blockchain everyone uses out there.

The most basic features of this blockchain are –

  • Anyone can run a BTC/LTC full node and start mining.
  • Anyone can make transactions on the BTC/LTC chain.
  • Anyone can review/audit the blockchain in a Blockchain explorer.

Example: Bitcoin, Litecoin etc.

Private Blockchain

A private blockchain, as its name suggests, is the private property of an individual or an organization. Unlike a public blockchain, here there is actually someone in charge who looks after important things such as read/write access and whom to selectively grant read access. Here consensus is achieved at the whim of the central authority, which can give mining rights to anyone, or to no one at all!

Example: Bankchain

Consortium Blockchain

This type of blockchain tries to remove the sole autonomy which gets vested in just one entity by using private blockchains.

So here you have multiple authorities instead of just one. Basically, you have a group of companies or representative individuals coming together and making decisions for the benefit of the whole network. Such groups are also called consortiums or a federation; ergo the name consortium or federated blockchain.

For example, let’s suppose you have a consortium of world’s top 20 financial institutes out of which you could decide that if a transaction or block is voted/verified by more than 15 institutions, only then does it get added to the blockchain.

Example: r3, EWF

In fact, the idea that cryptographic keys and shared ledgers can incentivize users to secure and formalize digital relationships has imaginations running wild. Everyone from governments to IT firms to banks is seeking to build this transaction layer.

Authentication and authorization, vital to digital transactions, are established as a result of the configuration of blockchain technology. The idea can be applied to any need for a trustworthy system of record.

January 11, 2019 05:31 PM

It's a blog !

image

This is the first post that comes with the blog by default.

Let’s see.

I made a blog.

Let’s try our best to make it useful, yeah ?

I don’t wish you all to be watching ads with my blog, so just wait for a while!

Good company in a journey makes the way seem shorter.
— Izaak Walton

January 11, 2019 05:31 PM

My Testimony about Blockchain - Part 1

Blockchain is a vast, global distributed ledger or database running on millions of devices and open to anyone, where not just information but anything of value — money, but also titles, deeds, identities, even votes — can be moved, stored and managed securely and privately. Trust is established through mass collaboration and clever code rather than by powerful intermediaries like governments and banks.
–Wikinomics

So I’ve been reading all about blockchains (even those 12 point font research papers!). This is a rough gist of what I learnt:-

A distributed ledger

Wikipedia explains thus –

“A distributed ledger is a consensus of replicated, shared, and synchronised digital data geographically spread across multiple sites, countries, or institutions. There is no central administrator or centralised data storage.”

This seems too much condensed. Let me break it down for you.

  • There is no central authority.
  • Every transaction here occurs in front of an array of guards that maintain order and make sure the transactions are completed in full by both parties.
  • These guards are just some computers that have volunteered to become a ‘node’. Only these nodes can validate the transactions of every user on a blockchain.

Before we go any further, I need to tell you what a transaction means in this context.

A transaction occurs when there is an exchange of data between any two parties. It need not be money only. It can be any data, you can even make a deal involving official papers of properties through some blockchain implementing platform!

And if this sounds scary, don't worry; no-one, not even those nodes (the ones which supervise the transactions), knows what exactly you exchanged! Kudos to privacy! And that's not even half of it! I'll explain more later. Now consider the conventional case of a bank (a conventional central authority).

NOTE : We are using ‘bank’ as an example just because it comprises a good amount of ‘transactions’. Always remember that these ‘transactions’ can be of data or goods too!

So here, in a bank, all the transactions occurring between accounts would be verified by a single, central authority, and all your possessions currently with the bank would be at the mercy of the whims of the bank, the single point of security in the transaction. If, by any chance, the bank burns down (physical damage to central authority) or gets robbed (or hacked), or seizes your account (unethically or otherwise) , there would be consequences, the likes of which you most probably won’t be comfortable with.

Enter blockchain with the power of a consensus-based distributed ledger! If we consider the case of the bitcoin blockchain, there are about 7000 nodes in the network that all work for the security of all those precious bitcoins that keep soaring and falling by the minute. For bitcoin to fail, all these 7000 points of security would have to be attacked at the same time, or at least half of them. Not only that, with the sky-high pricing of these virtual currencies, more and more people are opting in to become nodes, which adds to the security of the users (traders) making transactions over the bitcoin blockchain. So that's security for you and the 'things' you love! If you wish to know more about a blockchain that deals with data, check out ethereum. Ethereum is an open-source, public, blockchain-based distributed computing platform and operating system featuring smart contract functionality.

Block

A block is the ‘current’ part of a blockchain, which records some or all of the recent transactions. Once completed, a block goes into the blockchain as a permanent database. Each time a block gets completed, a new one is generated. There are countless such blocks in the blockchain, connected to each other (like links in a chain) in proper linear, chronological order. Every block contains a hash of the previous block. The blockchain has complete information about different user addresses and their balances right from the genesis block to the most recently completed block. Every node on the blockchain has a copy of the ledger with themselves, that gets synced after creation of a new block.
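Here is a minimal Python sketch of how blocks chain together through the previous block's hash. It is a toy model, not any real client's block format:

import hashlib, json, time

def make_block(transactions, previous_hash):
    # A toy block: its hash covers its own contents, which include the previous block's hash
    block = {
        "time": time.time(),
        "transactions": transactions,
        "previous_hash": previous_hash,
    }
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = make_block(["initial balances"], previous_hash="0" * 64)
block_1 = make_block(["A pays B 5 coins"], previous_hash=genesis["hash"])
# Tampering with genesis would change its hash and break the link stored in block_1
print(block_1["previous_hash"] == genesis["hash"])   # True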

The ‘what’ block ?

Every blockchain has to start somewhere, so there’s what’s called a genesis block at the beginning. This is the first block, and there, at the beginning, the creators of Ethereum (or any other cryptocurrency) were at liberty to say “To start, the following accounts all have X units of my cryptocurrency.” Any transfer of data on the blockchain will have originated from one of these initial accounts (or from mining).

The blockchain was designed so that these transactions are immutable, meaning they cannot be deleted. The blocks are added through cryptography (more, later), ensuring that they remain tamper-proof: the data can be distributed, but not copied (a node never knows exactly what's in these transactions). You can always see a block yourself by using a Blockchain Explorer.

Privacy – how?

The blockchain isn’t just a bunch of computers watching that A sent something to B in return for some data; it’s so much more than that! On-chain transactions refer to those cryptocurrency transactions which occur on the blockchain – that is, on the records of the blockchain – and remain dependent on the state of the blockchain for their validity. All such on-chain transactions occur and are considered to be valid only when the blockchain is modified to reflect these transactions on the public ledger records.

What the crypto?!

So how does cryptography exactly fit in with this blockchain? It’s simple- the nodes lock the data with a 256 bit number (Hash Sum) that represents the data within a block. A different blockchain may use a different hash function, but the basic idea of its integration in the blockchain remains the same (more or less).

Hashing Functions

A basic idea of any hash function.

source

If you look closely, you’ll notice even a slight change (even just 1 bit) in the data would create a different hash sum altogether. There is simply no pattern at all!
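You can try this yourself with Python's hashlib; flipping a single bit of the input ('b' versus 'B') produces a completely unrelated digest:

import hashlib

for message in (b"blockchain", b"Blockchain"):
    # One bit differs between the two inputs, yet the digests share no visible pattern
    print(message, hashlib.sha256(message).hexdigest())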

So here comes the answer to a question that might’ve struck you-

Why would anyone waste her/his own electricity and compute power to validate my transactions? Social service? Repentance out of guilt?

It’s MONEY!

There are nodes, there are traders, then there are MINERS.

Miners are a subset of nodes, as all miners must be running a full node (i.e. they must have the complete ledger with them) in order to mine, at least to mine properly. The nodes are what determine consensus, as all nodes must agree to the same rules, otherwise the nodes will fork off of the network.

Continue Reading

January 11, 2019 05:31 PM

recovering Arch from hell

Rebuilding an Arch

easier than it looks

PROBLEM

Not clear, but looks like misconfigured packages after multiple installations, uninstallations and re-installations of packages and Desktop Environments

PROLOGUE

So today I had problems that caused KDE Plasma to not acknowledge my laptop as a laptop. In other words, my Arch was on the edge of collapse.

BABY STEPS

So, I tried reinstalling all the packages of my installation in one command, like so

# pacman -Qenq | sudo pacman -S -

But as you can see the post hasn’t ended here, it didn’t pan out.

SOLUTION

After hours of help at #archlinux and #kde-plasma, I found this Forum page that gave me just the right instructions!

  1. First up, I removed all the orphaned/unused packages rotting away in my system.

    # pacman -Rns $(pacman -Qtdq)
  2. next, I force-reinstalled all the packages I had in my installation.

    # pacman -Qqen > pkglist.txt
    # pacman --force -S $(< pkglist.txt)

EPILOGUE

Now my installation is sweet as candy with no loss of any personal configs, and everything is perfect again!

😄 🎉

January 11, 2019 05:31 PM

abc of unix

UNIX

A is for awk, which runs like a snail, and
B is for biff, which reads all your mail.
C is for cc, as hackers recall, while
D is for dd, the command that does all.
E is for emacs, which rebinds your keys, and
F is for fsck, which rebuilds your trees.
G is for grep, a clever detective, while
H is for halt, which may seem defective.
I is for indent, which rarely amuses, and
J is for join, which nobody uses.
K is for kill, which makes you the boss, while
L is for lex, which is missing from DOS.
M is for more, from which less was begot, and
N is for nice, which it really is not.
O is for od, which prints out things nice, while
P is for passwd, which reads in strings twice.
Q is for quota, a Berkeley-type fable, and
R is for ranlib, for sorting ar table.
S is for spell, which attempts to belittle, while
T is for true, which does very little.
U is for uniq, which is used after sort, and
V is for vi, which is hard to abort.
W is for whoami, which tells you your name, while
X is, well, X, of dubious fame.
Y is for yes, which makes an impression, and
Z is for zcat, which handles compression.

— THE ABCs OF UNIX

January 11, 2019 05:31 PM

Blurred Lines

PROLOGUE

While setting up Dev environment on my CAIRO-STATION (desktop computer at my home), I realized I could not install Linux on that, since the system will be used by all family members. My best bet would have been a VM or some sort of Containerization. Then I recalled my early development days, and realized both of these are inferior to Windows Subsystem For Linux (WSL)

When I first discovered WSL as an optional feature in Windows 8.1, I was busy jumping between playing Just Cause 2 (a really great Open-World game, you MUST check it out!) and studying for XII “Board Exams”. I had a slight taste of Linux back then, enough to perform the most basic functions- ls, cd, and screenfetch (my favorite).

Then, last year I saw Microsoft announce 3 more Linux flavors for WSL incoming at Build Developer Conference, all I understood was more screenfetch outputs to bask in!

Motivation

Today, after a year of experience and two incredibly knowledgeable months at DGPLUG, ideas have become more achievable.

Today, an idea struck my mind-

Developers can finally use Ubuntu through command line interface, great! If they could also use GUI apps fired from within the Ubuntu bash CLI, ah that would have been lovely.

Ever since I installed Arch Linux on my system over days of research, I came to appreciate all the nit-bits and procedures involved in installing an OS and everything that a proprietary-software user believes should come with it.

Since now I possessed the knowledge I needed to pull this off, I fired up my home PC and I was ready to hack!

Baby Steps

  1. Got Ubuntu 18.04 from windows store.
  2. Turned ON WSL for my system.
  3. Updated all packages and installed screenfetch.

The New Part

The X Server handles output to the GUI, and a variable DISPLAY needs to point to the X Server. This was done by the command DISPLAY=:0. To avoid running it manually every time, I appended it to my shell config with the command: echo 'DISPLAY=:0' >> ~/.zshrc.

Now, all that I needed was an X Server that serves well!

My first attempt at X-Server for Windows was XMing, but unfortunately it couldn’t be detected by the WSL. XMINGnotDetected

X Server could not be found by WSL

Next up, I tried another procedure which goes as follows :-

  1. WSL opens up a TCP type port 2222 for SSHing.
  2. I SSH through PuTTY and enable X11 Forwarding inside it.
  3. The X Server used was still Xming.

The result is in the following photograph. XMingDetected

Some Progress!

There still were problems with this setup: latency. It seems obvious, since I was SSHing into my own system, which is not wise. So I decided to get a better terminal application and get this show on the road!

So, this time, I installed ConEmu. For those who are having a hard time shifting from Linux to the Command Prompt or Powershell, this is a relief. ConEmu is extremely customizable and rock-solid!

Also, I changed my X Server to MobaXterm, which does a far better and simpler job at handling the X Server and related tasks (servers, tunneling, packages, file system).

Final Set Up

The MobaXTerm X Server starts at Log In, and firing up ConEmu gives me a Ubuntu CLI.

Testing

XMingDetected

Mozilla Firefox

XMingDetected2

VS Code

I also followed a blog post by Nick Janetakis on setting up Docker to work with WSL flawlessly!

The setup he used was my inspiration for the post, and I hope it would serve me well for my oncoming endeavors!

January 11, 2019 05:31 PM

A real life Hogwarts

Skimming through your seniors’ profile does some good at times!

“The programmers of tomorrow are the wizards of the future !”
– Gabe, Co-founder-Valve

dgplug

LINUX USERS’ GROUP OF DURGAPUR
Learn and teach others

An excerpt from the official site :-

Objectives

  • Revisiting programming fundamentals
  • Get acquainted with Free Software technologies
  • Spreading the hacker ethics
  • Gaining technical knowledge -Real-world project experience

What I have learnt within the month at #dgplug online summer training is invaluable to me! We get to talk and learn from the Jedi of F/OSS, attend Guest sessions with international upstream contributors and so much more!

An excerpt from a Quora answer :-

How is the summer training at Dgplug?

For me, it was like Hogwarts, a place which normal people don’t know, yet full of surprises, and new learning! It opened a whole new world for me!
-Avik Mukherjee

And frankly, that makes the two of us.

January 11, 2019 05:31 PM

a better blog

Hi Greg! I saw your Github profile had this starter for a personal blog, so I thought

Why not!?

After days of scouring the internet, I finally landed on GatsbyJS starters, because

  1. There were seriously NO good themes for Nikola.
  2. I didn’t like the UI provided by Pelican.
  3. Gatsby looked good enough to be my blog :tongue:
  4. I did not wish to learn Front-end for the next month to make the Gatsby site out of documentation!
  5. Greg has made a masterpiece out of all the techs in the left bottom of the start page!

When I started looking for themes beforehand in GatsbyJS starters, and landed on your GitHub, I was elated only to find out you had the perfect fit waiting to be found!

Next up, I will be moving all my posts from Wordpress over here and publishing some new ones soon.

January 11, 2019 05:31 PM

January 04, 2019

Kuntal Majumder (hellozee)

It is New Year my dudes

If you don’t get the meme reference from the title, here is one more for you : Enough of memes, lets talk about something trendy, something that everyone is talking about cause January 2019 is all about setting up goals and resolutions, I am not being punny here.

January 04, 2019 06:01 PM

It is New Year my dudes

If you don’t get the meme reference from the title, here is one more for you : Enough of memes, lets talk about something trendy, something that everyone is talking about cause January 2019 is all about setting up goals and resolutions, I am not being punny here.

by hellozee at disroot.org (hellozee) at January 04, 2019 06:01 PM

December 24, 2018

Kuntal Majumder (hellozee)

Flashback 2018

tl;dr, Typical end of the year post as you may expect. Well, 2018 was a significantly productive year for me compared to other years, learned so many things which I wanted to, plus added more things to the bucket list for the coming year.

December 24, 2018 11:19 AM

Flashback 2018

tl;dr, Typical end of the year post as you may expect. Well, 2018 was a significantly productive year for me compared to other years, learned so many things which I wanted to, plus added more things to the bucket list for the coming year.

by hellozee at disroot.org (hellozee) at December 24, 2018 11:19 AM

December 21, 2018

Kuntal Majumder (hellozee)

How to Learn

If you search the web for how to learn something, you will surely get a bunch of techniques that would help you to remember something but that is not the learning I am talking about, that is in a sense, a kind of memorization.

December 21, 2018 03:09 AM

How to Learn

If you search the web for how to learn something, you will surely get a bunch of techniques that would help you to remember something but that is not the learning I am talking about, that is in a sense, a kind of memorization.

by hellozee at disroot.org (hellozee) at December 21, 2018 03:09 AM

December 19, 2018

Pradhvan Bisht (pradhvan)

Memory Management in Python – Part 2

In the last part, we checked out how variables are stored in Python and how Python handles memory management with reference counts and the garbage collector. If you haven't checked it out and want to, here is the link.

In this part, we will dig a little deeper into how reference counting works and how the count can increase or decrease in different cases. So let's start where we left off: every object in Python has three things

  1. Type
  2. Reference Count
  3. Value

Reference count is a value showing how many times an object has been referred (pointed) to by other names (variables). Reference counting helps the garbage collector free up space so the program can run efficiently. We can increase or decrease the reference count, and we can check its value with the built-in function sys.getrefcount().

Let’s take a small code snippet:

import sys

a = []

# Two references: one from the variable a and one from the argument passed to getrefcount()

print(sys.getrefcount(a))

2

The example looks great and everything seems to be working, but I did kinda trick you: not all reference count values start from the same point, so if you try the same example with a different value the output may differ. Reference count values depend on two factors: the number of times the object is used in the bytecode and the number of times it has been referenced elsewhere, which includes references the interpreter itself already holds.

Let’s look into another example:

import sys

a = 100

print(sys.getrefcount(a))

4

b = 100

print(sys.getrefcount(b))

5

c = 100

print(sys.getrefcount(c))

6

When more variables reference the same value, the reference count increases. But things change a little when we take into account container objects, like lists, and constants.

import sys

d = 100  # assumption: d holds 100 as well, like a, b and c above

ex_list = [a,b,c,d]

print(sys.getrefcount(a))

8

print(sys.getrefcount(b))

9

print(sys.getrefcount(c))

10

print(sys.getrefcount(d))

11

del ex_list

print(sys.getrefcount(a))

7

print(sys.getrefcount(b))

8

print(sys.getrefcount(c))

9

print(sys.getrefcount(d))

10

# Same thing goes with constants

print(sys.getrefcount(10))

12

const = 10

print(sys.getrefcount(10))

13

const = const + 10

print(sys.getrefcount(10))

12

As we saw, container objects (here, the list) hold references to the objects inside them; when we delete the list, those reference links are removed, so each object inside the list has its reference count decreased by one. A similar thing happens with the constant: when the name that referenced it is rebound to a new value, the reference count of the old value is decremented.

By now you must have realized that del does not actually delete the object; on the contrary, it removes that variable (name) as a reference to the object and decreases the reference count by one.

All the examples we saw are kinda similar, considering the fact that they are in the global namespace. But what happens to the reference count when functions are involved? Let's find out through this code snippet

import sys

num = 100

print(sys.getrefcount(num))

4

def ytf(number):
    print(sys.getrefcount(num))

ytf(num)

6

print(sys.getrefcount(num))

4

We saw that the reference count increased while ytf() was in scope and decreased once the function went out of scope. Keeping this in mind, we should be careful about using large or complex objects in the global namespace, because an object in the global namespace doesn't go out of scope unless we drop the references to it; a large object would therefore keep consuming memory, making the program less efficient.

That’s all for this part, in the next part we would look closely into the garbage collector and how it functions inside a python program in freeing up memory.

 

 

by Pradhvan Bisht at December 19, 2018 12:20 PM

December 16, 2018

Pradhvan Bisht (pradhvan)

Memory Management in Python – Part 1

Stumbling upon a Python code snippet from a GitHub repo, I came to realize that in Python variables don't actually store the values they are assigned to; variables actually store the location of the value. This is unlike C/C++, which actually creates a space of a fixed size and assigns it to the variable created, the bucket/room we usually describe while explaining variables to a beginner.

Python variables are a bit different: they work like keys which point to a particular room in the hotel (memory space). So whenever we make an assignment to a variable, we are not creating rooms; rather, we are creating keys to a room, which is freed/overwritten by Python's garbage collector automatically. (More on the topic of the garbage collector later.) So the point is, whenever we do something like

a= 10

b = 10

id(a)

94268504788576

id(b)

94268504788576

Python is optimizing here: we created two keys which point to the same room in the hotel (memory), so they have the same id. But this kind of optimization works only for integers in the range -5 to 256; if you exceed that range, the variables point to two different storages and thus have different id()s.
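A quick check of that range, entered one statement at a time at the interactive prompt (a script may fold identical constants and behave differently, so treat this as a prompt-only demo):

a = 256

b = 256

id(a) == id(b)

True

x = 257

y = 257

id(x) == id(y)

False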

Just don't get confused about why we used id() instead of "==": == checks whether the values pointed to by the variables are the same, while id() checks whether they use the same object, because every object has a unique identity which can be checked with the id() function.

According to the official docs for id(), in CPython the value is the address of the object in memory.

Coming back to the code snippet from the GitHub repo given below and applying the same knowledge of integers to strings.

a = "wtf"

b = "wtf"

id(a),id(b)

(139942771029080, 139942771029080)

a = "wtf!"

b = "wtf!"

id(a),id(b)

(139942771029192, 139942771029136)

a= "hello world this is a string"

b= "hello world this is a string"

id(a),id(b)

(139942770977328, 139942770977408)

The same kind of optimization happens here too: when the strings are small, both names refer to the same object in memory rather than creating a new one, thus saving memory. This is called interning. But when the string gets longer or contains characters other than ASCII letters, digits, or underscores (like the '!' or the spaces above), it is not interned.
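You can also force interning yourself with sys.intern, which always hands back one canonical object for a given string; a small demo at the prompt:

import sys

a = sys.intern("hello world this is a string")

b = sys.intern("hello world this is a string")

id(a) == id(b)

True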

This shows abstraction at its best, and Python is very good at it: it does all the heavy lifting of allocating/deallocating memory for you and lets you focus on the other parts of the program. Until you really want to know what's happening, and I assume you do, that's why you are reading this blog 😛

Though this explanation was already available on the repo, I wanted to know more about how memory management happens internally, so I stumbled upon the talk "Memory Management in Python – The Basics" by nnja. So yeah, people with nicks like ninja are great with Python and Fortnite hahaha! (I could not resist posting this joke, and just to clear things up, Ninja is one of the top Fortnite players.)

Thus, technically Python does not have variables but 'names' which refer to objects or other names, and Python likes to keep a count of all the references to an object, called its reference count. If the reference count of an object drops to zero, no reference is made to that object any more; the garbage collector of Python then treats it as free space, the object is deleted, and the space is free to use.

We as a programmer can increase or decrease the reference count of an object as a python object stores three things:

  • Type: int,string,float
  • Reference Count
  • Value

Looking back at the first code snippet of the blog: the two names a and b both point to the same object, with the value 10 and type int, and each name adds one to that object's reference count.

That’s all for this part, will cover the same topics in a bit detail in the next part. Still confused in some of the things so keeping this plain and simple for future me to look back when I am lost and can’t even notice the simple things 😛 and for someone who just wants to know this briefly.

by Pradhvan Bisht at December 16, 2018 01:16 PM

December 11, 2018

Pradhvan Bisht (pradhvan)

REST APIs with DJANGO

I recently finished REST APIs with Django by William S. Vincent. It's not a lengthy book, hardly 190 pages, but it does pack a lot of information if you are just starting out with building APIs with Django, and REST APIs in particular.

It's well written, so it's easy to understand, and it takes into account that you have just started out with Django, though this could be a bit frustrating to read if you have already been making apps in Django, because it explains a lot of basic concepts that I assume most readers would know about. It uses Django 2.1 and uses Pipenv for the virtual environment instead of venv, so that was new 😛

tl;dr A lightweight and simple book that packs a lot if you are just starting out with REST APIs.

I picked up this book because I wanted to work on Django Rest Framework and while reading a blog from the same author I noticed his book at the end. I liked the blog and did a quick search to check out reviews on the book, the reviews were positive so I bought the book for my Kindel.

Screenshot from 2018-12-11 21-23-49

The book mainly revolves around three projects that are covered in the nine chapters of the book, though eight I should say, as the first chapter talks about the basics of the World Wide Web, IPs, URLs, APIs, endpoints, HTTP, and ends with explaining what REST APIs are. The takeaway was that REST is just an architectural approach to building APIs, and a REST API at minimum follows these three principles:

1. It's stateless

2. It supports GET, POST, PUT, DELETE (HTTP verbs)

3. It returns data in either JSON or XML format

So where does Django REST Framework come into play, you ask? It's simple! It creates an API that supports all the HTTP verbs and returns JSON.

One more thing, not to confuse you, just clearing things up 😛 Django creates websites containing web pages, and Django REST Framework creates web APIs; they are two separate frameworks. Yes! Both can be used simultaneously in a single web app.

Now that I have flaunted my newly acquired knowledge let’s move forward with this review. haha!

The book helps you build three different projects. The first is a very basic library website API; since it's the first project in the book, it's just there to get you set up with the process, but it does the important job of helping you distinguish between the Django framework and Django REST Framework.

The second one is a ToDo API with a React front end, though I think it's put in the book either to get you used to making REST APIs, which can be repetitive at times, or just to get a beginner programmer to think "oh! covers React too, nice!" (bait). If you skip the chapter nothing would happen. For those of you wondering, I did not skip the chapter, I had to write this fancy review #dedication, haha!

The project that you would get the most out of is the last one, and it's the most basic thing every Django developer builds when he/she starts out learning Django. You guessed it right: a blog website, so this book helps you build a blog API with Django.

The whole project is spread across five chapters and broadly covers user permissions, user authentication, viewsets, routers, and schemas. It gives you enough understanding to look up the Django REST Framework documentation with ease. I think the author took this Blog API project in particular because a beginner who started with the Django Girls tutorial could make the same changes to that project and get an even better understanding, with something to work on by himself, which I would highly recommend doing and will be doing now.
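For a flavour of what the book builds towards, here is a minimal sketch of a viewset-plus-router setup in Django REST Framework. This is not code from the book; the posts app and the Post model with title and body fields are assumptions made up for illustration.

# serializers, views and urls condensed into one file for brevity
from rest_framework import serializers, viewsets, routers
from posts.models import Post   # hypothetical app and model

class PostSerializer(serializers.ModelSerializer):
    class Meta:
        model = Post
        fields = ("id", "title", "body")

class PostViewSet(viewsets.ModelViewSet):
    # ModelViewSet wires up list/retrieve/create/update/delete endpoints
    queryset = Post.objects.all()
    serializer_class = PostSerializer

# The router generates the URL patterns for the viewset
router = routers.DefaultRouter()
router.register("posts", PostViewSet)
urlpatterns = router.urls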

I would rate this book 3.9/5.

A must-buy if you are a beginner in Django or just starting out making REST APIs, or both. If you have decent experience with Django and know your internet jargon well, I would suggest going with the official documentation.

 

by Pradhvan Bisht at December 11, 2018 05:19 PM

December 10, 2018

Kuntal Majumder (hellozee)

Recreating the Marvel Intro with Python and Nuke

Let me start this one with a story. Once there was a kid who loved to play games, after a while, he wanted to make games of his own and tada a programmer was born.

December 10, 2018 09:37 AM

Recreating the Marvel Intro with Python and Nuke

Let me start this one with a story. Once there was a kid who loved to play games, after a while, he wanted to make games of his own and tada a programmer was born.

by hellozee at disroot.org (hellozee) at December 10, 2018 09:37 AM

December 07, 2018

Pradhvan Bisht (pradhvan)

To start, just start !

It’s almost end of 2018 and most of the people start working on their new year’s goals, well I am doing that too a bit early I guess 😛

I usually plan to write A LOT, but what I have seen is that once I start a blog post and work on it, I give up on it either mid-way, or sometimes it doesn't even reach the half-way mark, the reason being that I want it to be just perfect. What I mean to say is that I write posts in the utmost detail, like the last post that I started and could not publish, which was about PyCon India 2018. I wrote a lot in it and gave almost every good detail I remembered from the event, but like most of my posts, it could only make it to the halfway mark.

I don't know what it is; either I think I have a massive audience that eagerly waits for my posts, or I wanna be that kid that writes an awesome post every time he sits at the computer to write one. Whatever it is, I wanna change that. I want to write blog posts frequently. While finding a solution to this problem I remembered some lines from the CodeNewbie podcast, in which Julia Evans was the guest (she writes awesome blogs, go check them out if you haven't). Julia mentions that while she was at the Recurse Center she picked up the technique of writing small bits of blogs every day. The problem was that she had a lot going on during that time, so she didn't get much time to write, but she managed by writing consistently without thinking much about the size of the content or about writing just the perfect post.

The thing I took from the conversation was to write frequently without worrying much about those factors, and it's not like many people read my blog and would hunt me down to insult my poor posts, haha! Only a handful of good people from #dgplug read it 🙂 so I got nothing to lose.

Things I would follow from now it:

  1. Keep the blog short and crisp 400 words or less
  2. Blog every second day, I do read a lot these days mostly tech so I can put those notes up in the form of blogs that would help me in the future too.

So yeah hopefully you will be seeing a lot of my blog(mostly bad in the starting so I apologize in advance) from now 😛

by Pradhvan Bisht at December 07, 2018 02:30 PM

November 29, 2018

Aman Verma (nightwarriorxxx)

Learning to talk to computers with python

“Patience and perseverance have a magical effect before which difficulties disappear and obstacles vanish.”

-John Quincy Adam

Starting afresh, aiming to be more consistent this time, I joined Operation Blue Moon today (an initiative by @mbuf). My aim of learning to talk to computers will be tough, I know, but I also know I have to push my limits rather than just sitting inside my comfort zone. Hoping everything will be fine; I will try to stay as positive and focused as I can.

Coming to some cool stuff I learned today, all credit for which goes to @kushaldas (the insane).

Command prompt from python

 
#! /usr/bin/env python3
from cmd2 import Cmd

class repl(Cmd):
    """A minimal interactive prompt built on cmd2."""
    def __init__(self):
        Cmd.__init__(self)

if __name__ == '__main__':
    app = repl()      # create the prompt
    app.cmdloop()     # start the read-eval-print loop

Save the script and run it. You can run regular command line commands from inside the prompt by prefixing them with !, for example: !ls, !pwd.
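As a small extension (my own sketch, not something covered in the session), cmd2 turns any method named do_<name> on the class into a new command at the prompt:

#! /usr/bin/env python3
from cmd2 import Cmd

class repl(Cmd):
    def do_greet(self, line):
        """greet [name] -> print a small greeting."""
        self.poutput("Hello " + (line or "there"))

if __name__ == '__main__':
    repl().cmdloop()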

Tip of the day:-

Never use `pip` with `sudo`

Happy Hacking

by nightwarrior-xxx at November 29, 2018 06:30 PM

Sehenaz Parvin

Can we?

Can we please stop using filter on our photos? Can we just stop thinking ourselves a sheet of white paper with no marks? Can we please stop from doing what we don’t want? Can we please stop caring others ? Can we please stop ourselves from behavioral sciences? Can we please stop insulting others ? Can we stop demotivating others? Can we stop judging each others personal views? Can we stop telling what to wear ? Can we stop telling how to live life? Can we stop telling what to choose? Can we stop advising ? Can we stop hating others? Can we stop our so called show-off in public? Can we please stop blaming others? Can we please stop imposing others? Can we please stop !!!!!! Can we please come out of this trap!!! Can we wait a second!!!

Can we be “us” for a minute ? Can we be “me” for a minute? Can we please stop blaming ourselves for everything? Can we please use our makeup for looking prettier not shitier ? Can we please use normal filter one day? Can we please think normal and do what we want? Can we please stop off- showing why we are not? Can we please not take ourselves granted for. Minute? Can we please think about “me” for a minute? Can we stop elbowing each other in the want of limelight? Can we stop thinking about what people will think and judge us? Can we stop changing ourselves for others?

Can we? Please, women, it's high time now. Stop pretending! Start perceiving! Start convincing! We are all beautiful souls. Don't ditch yourself, your efforts, your dreams and your soul to pettifoggers. We use makeup not to look prettier but to stop ourselves from looking shittier! This is not done! We fear to click photos using a normal filter. Why? For whom? And for what? We should not look sexy to get likes or a life partner. We should not use apps for finding our life partner! Take a break from this biased society. Don't live your life according to them. Don't do something just because it is in trend. We are all different and that's what makes us special. We are special already. We just need to recognise ourselves. Live your dream! That's what is gonna make "you" happy.

I hope you all will agree with me. We all already know this but we don’t do this! Start believing in “you”. The “you” world is very beautiful.💕 Get out of the reel and face the real. That’s gonna be a revolutionary transformation.

by potters6 at November 29, 2018 05:47 PM

November 21, 2018

Ashish Kumar Mishra (ash_mishra)

Hackman 2k18

Hackman 2k18 was the 3rd version of Hackman, an intercollege 24 hour open-theme hackathon organised by the Department of Information Science and Engineering, Dayananda Sagar College of Engineering, Bangalore.

I have been a part of Hackman since the beginning. In the first Hackman, I was just a participant. I was in my second year and I did not know anything about the technologies that were trending. I was breaking my head with HTML and CSS and didn’t even know anything about the back-end systems and databases. But the things I learnt at the first Hackman were amazing. I was mesmerised and shocked to see so many things that were happening around me and I was completely unaware of it. I still remember Farhaan taking a Git session at Hackman and I walked out in the middle of it because I could not figure out what he was talking about. I regret doing that till date.

In the second version of Hackman, Hackman 2.0, I was in the core team organizing the event and I looked over the finances. This was the very first time I was a part of organizing something this big in college. But, with the help of my seniors and with sheer dedication, we were able to pull off a great event, even bigger than last time. Hackman 2.0 taught me how to deal with people which turned out to be a lot more difficult than dealing with computers. My seniors taught me a lot and helped me deal with the unexpected situations that occurred throughout the course of the event. Doing all this was very new for me and I enjoyed it a lot. Even this time, the technology mesmerised me, and I was left gaping at the things the contestants were doing. But, I realized that it was okay. It was just the lack of my knowledge that left me in a shock and I became more determined to learn new things and apply them. I was very happy being a part of Hackman 2.0 and when it ended, I was determined to organize the next Hackman bigger and better.

The time for the next Hackman, Hackman 2k18 came and I was ready for it. I had both the experience, of a participant and that of an organiser. I knew how to plan for the event and what to do when the plan fails. Throughout the preparation and the execution of the event, I had constant support of Farhaan Bukhsh, Abhinav Jha, Devesh Verma, Sudhanva MG, Ashish Pandey, Abhishek Agarwal, Goutham ML and all my seniors who are as much connected to Hackman as me. Some of the things I learnt in this Hackman was the amount of planning required in organizing the event is way too much. Getting sponsors for the event should be the first and foremost priority. We also had a staircase meeting at 5:00 am in the morning, during the event, where Farhaan, Abhinav and Saptak talked about the new things which could be done next year. It was amazing listening to the mentors about the ideas they had in mind for the next Hackmans to make it the greatest hackathon in Bangalore.

When I look back to the first Hackman, I still cannot believe how far I have come. From being a timid contestant who did not know anything about what to do in a hackathon, to handling the whole Hackman team as the Event Manager. The only thing that matters is how much are you willing to dedicate yourself towards something and how much are you willing to learn no matter what. This event taught me so many things in life, be it technical or non-technical that I will forever be grateful for it. I would love to see the next Hackman as bigger and better than what we organised and maybe someday it will be the biggest hackathon in Bangalore. #WeAreHackman

by Ashish Kumar Mishra at November 21, 2018 11:56 AM

October 29, 2018

Anu Kumari Gupta (ann)

Enjoy octobers with Hacktoberfest

I know what you are going to do this October. Scratching your head already? No, don't, because I will explain in detail all that you can do to make this October a remarkable one by participating in Hacktoberfest.

Guessing what is the buzz of Hacktoberfest all around? 🤔

Hacktoberfest is like a festival celebrated by people of open source community, that runs throughout the month. It is the celebration of open source software, and welcomes everyone irrespective of the knowledge they have of open source to participate and make their contribution.

  • Hacktoberfest is open to everyone in our global community!
  • Five quality pull requests must be submitted to public GitHub repositories.
  • You can sign up anytime between October 1 and October 31.

<<<<Oh NO! STOP! Hacktoberfest site defines it all. Enough! Get me to the point.>>>>

Already had enough of the rules and regulations and still wondering what it is all about, why to do it, and how to get started? Welcome to the right place. This Hacktoberfest centres a lot around open source. What is it? Get your answer.

What is open source?

If you are stuck on the name 'open source' itself, don't worry, it's nothing other than what the phrase 'open source' means. Open source refers to the availability of the source code of a project, work, software, etc. to everyone, so that others can see it, make changes to it that can be beneficial to the project, share it, and download it for use. The main aim of doing so is to maintain transparency, collaborative participation, and the overall development and maintenance of the work, and it is highly used for its redistributive nature. With open source, you can organize events, schedule your plans, and host them on an open source platform as well. And the changes that you make to others' work are termed contributions. The contributions do not necessarily have to be core code. They can be anything you like: designing, organizing, documentation, projects of your liking, etc.

Why should I participate?

The reason you should is you get to learn, grow, and eventually develop skills. When you make your work public, it becomes helpful to you because others analyze your work and give you valuable feedback through comments and letting you know through issues. The kind of work you do makes you recognized among others. By participating in an active contribution, you also find mentors who can guide you through the project, that helps you in the long run.

And did I tell you, you get T-shirts for contributing? Hacktoberfest allows you to win a T-shirt by making at least 5 contributions. Maybe this is motivating enough to start, right? 😛 Time to enter into Open Source World.

How to enter into the open source world?

All you need is “Git” and understanding of how to use it. If you are a beginner and don’t know how to start or have difficulty in starting off, refer this “Hello Git” before moving further. The article shows the basic understanding of Git and how to push your code through Git to make it available to everyone. Understanding is much more essential, so take your time in going through it and understanding the concept. If you are good to go, you are now ready to make contribution to other’s work.

Steps to contribute:

Step 1: You should have a GitHub account.

Refer to the post “Hello Git“, if you have not already. The idea there is the basic understanding of git workflow and creating your first repository (your own piece of work).

Step 2: Choose a project.

I know choosing a project is a bit confusing. It seems overwhelming at first, but trust me, once you get insight into how it works, you will feel proud of yourself. If you are a beginner, I would recommend you first understand the process by making small changes, like correcting mistakes in a README file or adding your name to the contributors list. As I already mentioned, not every contribution is code. Select whatever you like and feel you can change in a way that improves the current piece of work.

There are numerous beginner friendly as well as cool projects that you will see labelled as hacktoberfest. Pick one of your choice. Once you are done with selecting a project, get into the project and follow the rest.

Step 3: Fork the project.

You will come across several similar posts where they will give instructions to you and what you need to perform to get to the objective, but most important is that you understand what you are doing and why you are doing. Here am I, to explain you, why exactly you need to perform these commands and what does these terms mean.

Fork means to create a copy of someone else's repository and add it to your own GitHub account. By forking, you are making a copy of the project for yourself to make changes to. The reason we do this is that you might not want to make changes directly to the main repository. The changes you make have to stay with you until you finalize them and let the owner of the project know about them.

You must be able to see the fork option somewhere at the top right.

screenshot-from-2018-10-29-22-10-36.png

Do you see the number beside it. These are the number of forks done to this repository. Click on the fork option and you see it forking as:

Screenshot from 2018-10-29 22-45-09

Notice the change in the URL. You will see it is added in your account. Now you have the copy of the project.

Step 4: Clone the repository

What is cloning? It is actually downloading the repository so that it is available on your desktop for you to make changes. Now that you have the project at hand, you are ready to make the changes you feel necessary. It is now on your desktop, and you know how to edit it with the help of the necessary tools and applications on your desktop.

“clone or download” written in green button shows you a link and another option to directly download.

If you have git installed on your machine, you can perform commands to clone it as:

git clone "copied url"

copied url is the url shown available to you for copying it.

Step 5: Create a branch.

Branching is like having several directories on your computer: each branch holds a different version of the changes you make. It is essential because it lets you track the changes you made by creating branches.

To perform operation in your machine, all you need is change to the repository directory on your computer.

 cd  <project name>

Now create a branch using the git checkout command:

git checkout -b <branch-name>

Branch name is the name given by you. It can be any name of your choice, but relatable.

Step 6: Make changes and commit

If you list all the files and subdirectories with the help of ls command, your next step is to find the file or directory in which you have to make the changes and do the necessary changes. For example. if you have to update the README file, you will need an editor to open the file and write onto it. After you are done updating, you are ready for the next step.

Step 7: Push changes

Now you want these changes to be uploaded back to the place they came from. The phrase used for this is that you “push changes”. You do this because, after finishing the work, i.e. the improvements to the project, you will want the owner or creator of the project to know about it.

So, to push the changes, run the following:

git push origin <branch-name>

Here origin is the default shortname that refers to the remote URL of your fork. You can use any other shortname in place of origin, but you have to use the same one consistently in the following steps as well.
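Continuing the hypothetical example from the earlier steps, pushing the fix-readme-typo branch to your fork would look like:

git push origin fix-readme-typo

Once the push completes, the branch and its commits appear on your fork on GitHub, ready for a pull request.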

Step 8: Create a pull request

If you go to the repository on GitHub, you will see information about your update and, beside it, a “Compare & pull request” option. This is the request made to the creator of the main project to look into your changes and merge them into the main project, if that is something the owner allows and wants. The owner of the project reviews the changes you made and merges or amends them as he/she feels right.

And you are done. Congratulations! 🎉

Not only this, you are always welcome to go through the issues list of a project and try to solve a problem: first comment and let everyone know what idea you have to solve the issue, and once your idea is approved, make your contribution as above. You can then make a pull request and reference the issue that you solved.
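As a small illustration (the issue number here is made up), adding a line like the following to your pull request description:

Fixes #42

links the pull request to that issue on GitHub, and the issue gets closed automatically when the pull request is merged.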

But, but, but… why don’t you create issues on a project of your own and add the Hacktoberfest label for others to solve? You will be amazed by the participation. You are the admin of your project: people will create issues and pull requests, and you review them and merge them into your main project. Try it out!

I hope you find this useful and enjoy doing it.

Happy Learning!

by anuGupta at October 29, 2018 08:20 PM

Kuntal Majumder (hellozee)

Another year, nice one

So apparently one of the oldest communities in probably the whole of India celebrated its 2nd anniversary on the 28th, after it was revived back in 2016. And you know what, this time it was a Capture The Flag event, something new tried by the group of people who joined hands to not let this community go into hibernation again.

by hellozee at disroot.org (hellozee) at October 29, 2018 07:28 AM

October 22, 2018

Jagannathan Tiruvallur Eachambadi

New Templates in Dolphin

I was using kio-gdrive to access my Google Drive account from Dolphin (the file manager). Since these are mounted as a virtual filesystem, I was not able to save files to them directly from LibreOffice or any external program. So I thought creating a new document from Dolphin and then editing that empty document would be easier. But information was scant on how to put this together. I knew we needed a template, which is just an empty file, but I didn’t know how to wire it all up so that it shows up in Dolphin’s “Create New” context menu.

Steps to get it working

The example assumes you want to create an empty document (ODT file). First create a template file by saving an empty document in ~/Templates. This is just a suggested directory; any place would be fine. As of kf5, the path for user templates is ~/.local/share/templates, which can be obtained from kf5-config --path templates.

So in ~/.local/share/templates, create a desktop entry file like so:

# ~/.local/share/templates/writer.desktop
[Desktop Entry]
Version=1.0
Name=Writer Document
Terminal=false
Icon=libreoffice-writer
Type=Link
URL=$HOME/Templates/Untitled 1.odt

After this, Dolphin should pick up the entry and show it in the “Create New” menu. (Screenshot: the “Create New” context menu.) One has to take care to give the files a proper extension when naming them, though, since Google Docs won’t like files without an extension, although they can be opened from Drive into Docs.
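Putting it together from a terminal might look roughly like this; it is only a sketch and assumes the empty document has already been saved as ~/Templates/Untitled 1.odt, the same file the desktop entry above points to:

kf5-config --path templates
mkdir -p ~/.local/share/templates

The first command confirms where KDE looks for user templates, the second makes sure the directory exists, and then you create writer.desktop there with the contents shown above.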

by Jagannathan Tiruvallur Eachambadi (jagannathante@gmail.com) at October 22, 2018 10:46 PM

October 16, 2018

Abdul Raheem (ABD)

Guest Session by warthog9 (IRC nick) and Emacs Sessions by Shakti Kannan (mbuf, IRC nick).

Hello, world!

Feeling really good to be back to writing my blog again after about 1–1.5 months. As I said in my previous blog, I got busy with my college work and missed PyCon India, the chance to meet the people I was interacting with at dgplug, and obviously the chance to meet the mentors as well 😔. I am still busy with it, but I thought I should make some time for this and for learning something that is better for my future, not just literally byhearting each and every answer and writing it in the exams. One thing I was really missing was Jason Braganza’s comments on how to improve my blogs 🙂.

So basically I have gone through some 3–4 dgplug logs on Emacs, and I got to know about many commands. I will give a quick summary of the commands I learned; I cannot type each and every command here, as that would make this a very lengthy blog, so I will leave a link to the dgplug logs, where you can find all the 10 logs related to Emacs starting from 16-Aug-2018. Some of the basics: to open Emacs from the terminal, type emacs -Q. Inside Emacs, paste (yank) the last killed text with C-y (C stands for the “Control” key); move to the end of the line with C-e and to the beginning of the line with C-a; move one sentence forward with M-e and backward with M-a (M stands for the “Alt”/Meta key); move forward one paragraph with M-} and backward with M-{; save the buffer to a different file with C-x C-w; and open (find) a file with C-x C-f. These were some of the basic commands that I remember; again, do check the dgplug logs to know more commands.

Buffer commands:
The next thing I got to know about is buffer commands; everything is a buffer in Emacs. You could be chatting on an IRC channel, composing an email, or writing code: everything is a buffer. I will mention the ones I remember; again, do check the logs for more information. To switch to another buffer, type C-x b (and you can come back to the scratch buffer the same way, with C-x b); list the buffers with C-x C-b; close all other windows with C-x 1; rename the current buffer with M-x rename-buffer; save modified buffers to their files with C-x s; and move the cursor to the next window with C-x o. There are many other buffer commands.

Window commands:
The next thing is window commands; again, I will mention some of them, do check the logs for more info. To split the current window into two, one above the other, type C-x 2; to delete the current window (not the buffer or file), type C-x 0; enlarge the window with C-x ^; and to scroll the text in the other window, type C-M-v. There are many other window commands.

Frame commands:
The next is frame commands. If you have a large screen, you can open multiple GNU Emacs frames. You can create one using C-x 5 2, move the cursor between frames using C-x 5 o, and find a file in a new frame with C-x 5 f. These were some basic frame commands; I still have to go through the rest of the logs, and you can go through all of them too, as I have mentioned above, with the link to all of those logs from day 1.

In between these Emacs sessions there was a guest session by warthog9 (IRC nick); I don’t know his name, but it was a really interesting one (link to that session). He gave some amazing suggestions and told a story which was also amazing :). These are some of the suggestions he gave when Kushal Das asked him to share advice with his students.

His first suggestion was obviously to get some virtualization software running somewhere; KVM/QEMU is free if you are comfortable with Linux.
If you have a Mac, Windows, or Linux machine, VMware has good options (full disclosure: he works for VMware).
It is interesting to try setting up and playing with ownCloud/Nextcloud (doesn’t really matter which), Squeezebox or another music-jukebox kind of server, and a Windows file sharing setup (Samba specifically).
Once you have Samba working, figure out how to export the same directories via NFSv4.
Set up a modern website with nginx or Apache; you could even run containers to get it all working, which could be a bit advanced, but would be a good learning opportunity (a minimal sketch follows after this list).
Once you have the above things ready, go play with collectd and Grafana and collect some interesting statistics and graphs from your other VMs; seeing pretty graphs about how your machines are doing is always helpful.
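As a minimal sketch of the nginx suggestion (none of this is from the session itself, and the package manager commands assume a Debian- or Fedora-style system), getting a default site running could look like:

sudo apt install nginx        # or: sudo dnf install nginx
sudo systemctl enable --now nginx
curl http://localhost         # should return the default welcome page

From there you can replace the default page under the web root and grow the configuration as you learn.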
Happy learning 🙂

by abdulraheemme at October 16, 2018 07:35 PM

October 09, 2018

Mohit Bansal (philomath)

Internet Relay Chat!

This blog post will cover the basics of IRC (Internet Relay Chat) and who should use it. It's been a long time since I first used IRC, and it was not a very pleasant experience at first. Now, though, I use IRC as my primary mode of communication. I won't be surprised if you haven't heard of IRC yet, or if you have heard of it but never tried it. I know what you are thinking right now: "IRC, stupid, eh!".

by Abstract Learner (noreply@blogger.com) at October 09, 2018 04:48 PM

September 14, 2018

Kuntal Majumder (hellozee)

A Year Passes by

ILUGD, better known as India Linux Users Group - Delhi, is a LUG based in Delhi NCR. Bear in mind that it is “India” and not “Indian”; a lot of people get that wrong. “Indian” would mean it is exclusively for Indians, which we are obviously not; rather, our group is based in Delhi, which is in India, so hashtag_blah_blah.

by hellozee at disroot.org (hellozee) at September 14, 2018 04:13 PM