Eclipse, EGit, GitHub and headaches…

I have been using Eclipse as my Python development environment ever since the KDevelop folks decided to abandon the Python platform (which is a pity – I liked KDevelop's feature set better than PyDev in Eclipse). Anyway, I've got multiple projects on the go, and going through the hassle of setting up SourceForge accounts for them is not something I look forward to (especially since those are small projects). So I decided to take the plunge and explore GitHub.

Now, creating the account on GitHub was fairly straightforward, and I also created/imported my SSH key along the way. However, making Eclipse talk to it was one heck of a hassle. I started off following Lars Vogel's tutorial (really, you need to read all of it, including the GitHub section), but I couldn't "push" my changes back to GitHub, getting "Authentication error" messages. That wasn't what I had expected, so I read through the GitHub tutorial and did the "manual" setup of the git repo from the command line (successfully), yet removing and re-adding the project to Eclipse from scratch yielded no positive results. That's when I got to the EGit Wiki and realized that since I've got a "custom" key created for development purposes, I have to add it to the list of keys Eclipse is aware of. So, under "Window" > "Preferences" > "General" > "Network Connections" > "SSH2" I added my key and… oh miracle! I got the prompt for my password. From that point on it looks like GitHub is functioning for me; now I have to figure out how to properly use Git 🙂

Fedora 16 LXDE Xorg keyboard layout switching

I was trying to set up a laptop with Fedora 16 & LXDE to have alternative keyboard layouts, and it turned out LXDE is pretty spartan when it comes to such things. So back to Xorg setup. The on-the-fly change:

$ setxkbmap -option '' -option grp:switch,grp:alt_shift_toggle 'us,ru(phonetic)'

and a more permanent solution lives (obviously) in /etc/sysconfig/keyboard:
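The snippet itself got lost from the post; a sketch of what /etc/sysconfig/keyboard could contain, with variable names assumed from Fedora conventions of that era (verify against your release's documentation):

```
KEYTABLE="us"
LAYOUT="us,ru(phonetic)"
OPTIONS="grp:switch,grp:alt_shift_toggle"
```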


With the addition of the LXDE applet via:

"Right click on the panel → select Add / Remove Panel Items → Add → select Keyboard Layout Switcher and click Add; use the Up and Down buttons to move the plugin to the desired position. You can now switch layouts by using the keyboard shortcut or by clicking on the xkb plugin."


Fight cloud with cloud (#OccupyCloud ?)

With the advance of clouds and the aggressive invasion of "social services" like Facebook, MySpace, Google* etc., it looks like there is no space left for a person's private data (Naomi Klein's "No Space" comes to mind). As soon as information is fed to Facebook, Google or another entity, it stops being the property of that person and becomes property of the company. Another thing that is happening is the annihilation of local services, local communities and local knowledge (it sounds like reversing that trend in Egypt helped the revolution). At present, to know the community which is right at your doorstep you have to go to Facebook, Twitter, MySpace, etc. and explore it there. It's not hard to imagine the disappearance of Facebook* services one day (either entirely – the South Park way – or partially – the Facebook way). That could have a very measurable negative impact on a community hooked on such services. The scenario can be reapplied multiple times for different "cloud providers" and different "communities". In other words, people are in great danger of losing not only their personal data but also their collective/community data. Imagine losing all of Dickens's books overnight (or Orwell's), or any other cultural heritage that doesn't belong to a single individual but to entire nations or even the entire planet.

There's a solution: the most antagonized creation of the IT industry – BitTorrent. Content publishers of all kinds (MPAA, RIAA, BlahAA etc.) are after BitTorrent users, ISPs are after BitTorrent, throttling it down to a trickle, software manufacturers are for the most part scared out of their minds, and the media is demonizing BitTorrent users. These are all entities that want to own a person's data but don't want to give back much: Blu-Ray wants to know all about a person's movies and lock him out if it doesn't like something, ISPs want to know what a person is doing online and sell him/her out to the highest bidder, software producers want to know the consumer's every move and turn it into a commodity or force-feed him advertisements. The common line is to strip the consumer of his privacy and his rights, and to commoditize him/her.

As Google says in their own words, to their investors:

Who are our customers? Our customers are over one million advertisers, from small businesses targeting local customers to many of the world’s largest global enterprises, who use Google AdWords to reach millions of users around the world.

And as Mathew Ingram sums up in his article:

As the saying goes: If you’re not paying for it, then you’re the product being sold.

Linking all of the above with a brilliant presentation by Mark Pesce, some things come to mind:

Peer2Peer distribution + Localization + IPv6 = Freedom

The above needs some explanation and requires some technical skill to grok. The equation is actually much more complicated than that; here's what it translates to (or was born from).

Following Mark Pesce's logic, the more popular a resource is, the more available it becomes. Note also that the resource does not exist in any single location; instead it exists on dozens of computers at the same time. Such distribution is a bonus for any sort of freedom movement (WikiLeaks, anyone?), as it removes the single point of entry (ISP, domain registrar, government, etc.) that can be sued or scared into dropping the hosting of such content. Just as Mark argues (and as everybody has known for a while now), once content is published online it starts a life of its own and can't be contained. Only in the Peer2Peer scenario the survival rate is even higher.

Private Peer-to-Peer networking seems to be developing too: N2N, RetroShare, etc., which brings us one step closer to implementation.

Back to our equation: localization is needed to retain community information within the community (because of its high appreciation and value in this context) while making it available to everybody outside at speeds proportionate to demand. In other words, if your town has a pile of resources, it shares them primarily locally, and if anybody outside the community is interested as well, the law of latencies helps here. Currently ISPs are the gatekeepers, so if there's no ISP in town – no data sharing for you. In other words, tech-savvy communities are hostages of ISPs. The alternative is a local mesh network that doesn't need an ISP. All the "spare parts" are readily available – WiFi-equipped devices are on every corner, so turning them all into access points could create a local "roaming zone". With Peer2Peer-based content distribution (think HTTP-over-BitTorrent), a community can host its own sites/forums/mailing lists/you name it without ever needing a provider. It's even possible to use different carriers – HTTP-over-SMS, old-school dial-up, even pulling an ethernet cable across the driveway to your neighbour's house, Bluetooth, infra-red, etc.

Localization is good, but inter-community communications are still needed. Now is the time to invoke FidoNet – an asynchronous, distributed network of semi-autonomous nodes. A brilliant idea that was both right for its time and too advanced for its time. Taking a close look at its node organization, it is exactly like what is described above, except it required phone lines. That is where IPv6 comes into play. FidoNet had a node list and network addresses assigned by a central authority, but essentially addresses were unlimited, just like with IPv6. If we take IPv6 as the transport layer, we've almost resolved the problem of compatible addresses across the globe – every single machine can have a unique address and routing can be done based on that. Now the idea doesn't seem so crazy and distant, does it?

A couple more details to make it more attractive and add more meat to it: since we've got mesh networking, the IPv6 protocol and BitTorrent-like distribution of content, we have freed ourselves from the hard dependency on a specific physical medium for transport. Whether it's a phone line from my house to the neighbour's, shared WiFi, P2P radio antennas, ham radio or pigeon mail – when somebody locally requests pageX that is not part of the local community's infrastructure, its download is scheduled throughout the community network of nodes and, at the first opportunity, it is downloaded to the computer of whoever requested it. Now pageX is local. The next person asking for pageX will get it locally! The more popular the page, the more people locally will store it, so as per Mark Pesce the download speed goes up. A-ha! With clever mechanisms of caching and expiry it's not so hard to devise a fairly efficient method of keeping things that are of interest to the population readily available (and not controlled by anybody).
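A minimal sketch of such an expiry mechanism, assuming cached content sits as plain files under a cache directory (all paths and the 30-day window are made up for illustration), while pinned content lives elsewhere and is never touched:

```shell
#!/bin/sh
# Hypothetical layout: transient downloads under $CACHE, permanently
# shared ("pinned") content under $STORE.
CACHE="${CACHE:-$HOME/.p2p/cache}"
STORE="${STORE:-$HOME/.p2p/public}"
DAYS="${DAYS:-30}"   # expire cached files untouched for this many days

mkdir -p "$CACHE" "$STORE"
# Expiry applies only to the cache; the public store is never expired.
find "$CACHE" -type f -mtime +"$DAYS" -delete
```

Popularity could hang off the same mechanism: touching a cached file on every local hit resets its expiry clock, so hot items naturally stay around longer.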

Now the next aspect of this theme is permanent local storage. While in the above scenario people keep downloading and storing other people's stuff locally, it's important for that "other people's stuff" to exist in the first place. All that needs to be done is to have a "local storage" defined on all nodes, whose content, just like with BitTorrent and other Peer2Peer networks, is shared freely upon request with the rest of the world but permanently resides on the local computer (unlike cached content that a person requested today or yesterday, which can expire tomorrow). In that case the user's machine becomes the "host" for the content, but if the content becomes popular, the burden of serving it is shifted to the… wait for it… wait for it… cloud!

The above resolves the problem of content ownership and content persistence. If I like what I downloaded, I move it to my local storage, making it something I host permanently; now there are 2 hosts hosting the same content (with the same signature) on the Peer2Peer network. It looks like having 3 different types of storage should resolve the majority of use cases: a private store, a public store and a cache store. The private store holds data you do *not* intend to share with anybody (personal documents, pictures, etc.), the public store holds [personal] information intended for sharing – movies, sites, files, music, documents, etc. – and the cache store holds only transient data, i.e. data that a person downloaded for whatever reason and is keeping for the time being to speed up subsequent access (this is the only part controlled by automatic measures of expiry etc.).

The above may sound far-fetched, but something is already happening in this domain – the FreedomBox Foundation has just started its operations, and if you look at the goals, they are already thinking in that direction:

We’re building software for smart devices whose engineered purpose is to work together to facilitate free communication among people, safely and securely, beyond the ambition of the strongest power to penetrate, they can make freedom of thought and information a permanent, ineradicable feature of the net that holds our souls.

Currently it looks like they are at the point where they target only communication itself, not data preservation, but why wouldn't that be the next step?

To get around ISPs getting overly sneaky and curious, a layer of Tor could be implemented between inter-community nodes, or even throughout the community.

Imagine the applications for sharing information. Assume person A lives in community X. Now A goes on a trip to community Y and of course brings his laptop (?) with him. While he's at the bus station, everybody in close proximity gets to "know" what A knows and shares content with him (if they choose to) – anonymously, at great speed and without paying fees to the carrier.

The last piece missing in all of the above is an out-of-the-box hardware/software platform to support it. FreedomBox doesn't seem to have goals that reach this far, and we won't witness any great movement from Google, Microsoft, Apple or any other commercial entity that is not deeply rooted in the OpenSource world. All the proprietary vendors gear their operations towards other corporate/commercial entities rather than the average person (as was mentioned and proven earlier). It is not in their interest – without our data they have nothing to sell.


DRM and what does it really mean to you

  • Taking a French course at Athabasca University and having to deal with "online-only" materials and hard-to-rip discs. Now take a look at the prices too.
  • Reading books on an eBook reader with Adobe Digital Editions

Porn again

With the onslaught of internet opinions/articles/news sources, it gets harder to distinguish the genuine, unbiased or technically accurate ones from those that are not.

Considering that the Internet is nothing else but "an enormous international database of naked bottoms" (as per Steve from Coupling), I felt like following Jeff's example from that very same episode:

Jeff: Well, it’s kind of hard to tell isn’t it ‘cos you tend to fast forward if anyone’s dressed. Sometimes I forget and do that with proper films. I can get through a lot of movies in an evening.

So, reading articles, I find myself skipping right to the comments section.


Do no evil, see no evil, hear no evil

There have been a lot of coincidences lately: me reading Slavoj Žižek, the BBC quoting Marx, Adbusters quoting Marx and statistics (confirmed by the WHO) on the percentage of mentally ill people in a capitalist society going up over time.

Following Slashdot's article ( ) and reading through FreeSoftwareMagazine ( ) – which really comes as no surprise after "Google joins California Do-Not-Track opposition lobby" ( ) – and that is all summed up in Adbusters (more precisely, in Jennifer Egan's novel "Look at Me"):

…The narrative of industrial America began with rationalization of objects through standardization, abstraction and mass production, and has concluded with the rationalization of human beings through marketing, public relations, image consulting and spin…

So while corporations like Google argue for transparency, what we have in our own backyard is "New Wikileaks Docs Show Ex-Minister Bernier Offered To Leak Copyright Bill to U.S." ( ) and "Libyan papers 'show CIA and MI6 links'" ( ).

Facebook cleans up compromised accounts = people losing data/money

Explore Facebook as a means of avoiding reality and global issues, making the "self"/ego the focus of everything.

Coincidence? South Park's episode about the loss of the internet.

narcissistic tendency

How to spot a narcissist

Narcissists thrive in big, anonymous cities, entertainment-related fields (think reality TV), and leadership situations where they can dazzle and dominate others without having to cooperate or suffer the consequences of a bad reputation.

Does the growth of narcissism reflect the tendency to outsource? Narcissism is linked to leadership skills – the need to dazzle and to depend on others for positive feedback. When an entire society is geared towards outsourcing, everybody is a "manager of X" – and since all day long you try to concentrate the attention of others on yourself, wouldn't you develop narcissistic traits?

It appears that narcissists seek out people who maintain their high positive self-image, at the same time intentionally avoiding and putting down people who may give them a harsh dose of realism.

and another one:

“In the long run it becomes difficult because others won’t applaud them, so they always have to search for new acquaintances from whom they get the next fix.” This could explain why narcissists so frequently change their social contexts and maintain only weak ties to others.

and another one:

The whiplash combination of parental coldness and excessive parental admiration is more strongly related to maladaptive narcissism than is either attitude alone.

wtf is twittergate???

Fedora MD RAID check WTF

Today, out of the blue, my box decided to do a RAID check on my MD devices. I can't remember seeing it while I was running Gentoo, but now, with Fedora, things feel somewhat different. Fedora does automate quite a few things out of the box – things I had omitted in my previous Gentoo experience.

What caught my attention was both the (out-of-the-blue) high load on the machine and:

# cat /proc/mdstat
 Personalities : [raid1] [raid0]
 md126 : active raid1 sdc6[1] sda6[0]
 308793280 blocks [2/2] [UU]
 [========>............] check = 40.7% (125758464/308793280) finish=43.7min speed=69702K/sec

which led me to a nearby Google outlet, where I immediately borrowed some wisdom on a somewhat related subject: disks and S.M.A.R.T.:

# smartctl --health /dev/sdc
smartctl 5.41 2011-06-09 r3365 [x86_64-linux-] (local build)
Copyright (C) 2002-11 by Bruce Allen,

SMART overall-health self-assessment test result: PASSED
Please note the following marginal Attributes:
190 Airflow_Temperature_Cel 0x0022   053   040   045    Old_age   Always   In_the_past 47 (2 51 47 25)

So after enjoying a rather interesting feature (smartctl, that is), I checked around and found out that in some configurations this is "automatic behavior". That led me to further discoveries, this time from Ubuntu-land, and ended in the glorious discovery of a "magic device" in my possession:

# cat /etc/cron.d/raid-check
# Run system wide raid-check once a week on Sunday at 1am by default
0 1 * * Sun root /usr/sbin/raid-check
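For the curious, the same scrub the cron job kicks off can be driven by hand through the md sysfs interface. A sketch, assuming the md126 array from the mdstat output above (needs root, and the device name will differ per box):

```shell
# Start a consistency check on md126:
echo check > /sys/block/md126/md/sync_action
# Watch it crunch:
cat /proc/mdstat
# Abort the check if the load is unwelcome right now:
echo idle > /sys/block/md126/md/sync_action
```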

…back to sorting out the rest of my Gentoo -> Fedora migration…

My saga of exodus to Fedora

After successfully installing Fedora 15 on my home box, I am moving all my stuff from Gentoo to Fedora. I'm still questioning my move, but lately I have less and less time to dedicate to proper maintenance of Gentoo, not to mention that at work I run RedHat servers, so at the moment I'm much more familiar with RedHat's insides than Gentoo's (it wasn't like that a year or two ago). I still think Gentoo is a brilliant distro; it taught me a lot about the inner workings of things. I didn't want to go Ubuntu (it really does piss me off how it gets in my way all the time), so Fedora felt like a happy medium between Gentoo and Ubuntu, and it would provide some learning grounds for my office use of RedHat. Prior to my home move I migrated my office machine with no problems whatsoever, but then my office setup was not as elaborate as the one at home. I had already tried migrating to Fedora at home once and failed, thanks to LiveCDs. This time I've got a system that works and doesn't show too many signs of instability. Another thing is to keep Gentoo around in a VM, just in case I have to fall back to it for some apps/functions.

Plan of actions:

  • add/migrate all filesystems from Gentoo to F15
  • make F15 boot from the sandwich I ran in Gentoo: RAW -> MD (raid1) -> LVM
  • enable SELinux
  • depending on success: migrate Gentoo into a VM

Couple of interesting hurdles/glitches:

  • Lightspark is way more unstable in Fedora vs. Gentoo
  • my NVidia sound keeps throwing some odd messages in F15 (not in Gentoo)
  • I have to deal with systemd startup (quite a bit of learning here)
    • MD RAID and LVM issues
  • I have to deal with SELinux
    • NFS issues
    • ReiserFS issues

So here’s the story so far…

Installing Flash, I ended up with 32-bit Adobe crap bolted on via nspluginwrapper (Lightspark turned out to be quite unstable in Fedora). But at least it works…

To know where my problems start, you'd have to know where I'm coming from. On my Gentoo box I had been using ReiserFS for quite some time due to its effectiveness on systems with lots of small files. I also have a NAS, where most of my stuff lives (or is synced to), mounted over NFS.

After installing F15 on fresh new partition[s] and making sure the install was functional, it was time to migrate the systems. A couple of problems I ran into on the first go:

  • SELinux wouldn’t let me use ReiserFS partitions as they don’t support Extended Attributes
  • As soon as I plug the Gentoo entries into /etc/fstab, all goes to hell and systemd rebels against me

The first one was easy – SELinux operates in "permissive" mode now, and I slowly collect its reports and combine the fixes either into a policy or into context fixes etc. on disk. A very tedious task. A couple of really useful tips:

from Dan Walsh:

If he had the te files from the previous run, he could use audit2allow to add rules to the te file.

# audit2allow < /var/log/audit/audit.log >> myexim.te

I hadn't realized that every time I ran "audit2allow -M" it was leaving a .te file for me in root's home directory. So instead of generating a gazillion tiny policies, I ended up buffing up the one generated by audit2allow further and further. Cuts down on clutter and keeps things neat.

The Fedora SELinux pages have quite a bit of info too and were quite helpful in understanding what I'm dealing with.
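The build-and-load cycle for such a policy module, sketched with a made-up module name ("mylocal") and run as root, goes roughly like this:

```shell
# Turn accumulated AVC denials into a policy module called "mylocal"
# (writes mylocal.te and the compiled mylocal.pp into the current directory):
grep avc /var/log/audit/audit.log | audit2allow -M mylocal
# Inspect mylocal.te, then load the compiled module:
semodule -i mylocal.pp
```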

My NFS shares from the NAS mounted OK as root, but when I got to user access I discovered that I couldn't view anything on the NAS. Which naturally pissed me off. Then it dawned on me that the NAS has groups that are out of sync with my workstation, so I have to create a NAS-like group on my machine and get myself into it. So after simply adding:
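The group line itself didn't survive in the post; what goes into /etc/group is a line of this shape (the group name, GID and user below are made up – the GID has to match the group owning the files on the NAS):

```
nas:x:1000:myuser
```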


to /etc/group and re-logging in, things started to look a bit better.

The next punch was delivered by systemd – it clearly gets ahead of itself and tries to mount MD RAID/LVM volumes prior to their initialization. My first attempt was to get the mounts into systemd-like form by crafting things like this:

$ cat /etc/systemd/system/mnt-gentoo.mount
#  This file is part of systemd.
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU General Public License as published by
#  the Free Software Foundation; either version 2 of the License, or
#  (at your option) any later version.

[Unit]
Description=Gentoo root

[Mount]
# What= must point at the backing block device
Where=/mnt/gentoo
# Options=bind

which looks and feels like an abomination to me. So it took me some time and effort, and my last (but not the least hackish) attempt looks like this: we initialize all MD/LVM devices from the boot string via dracut – truly a stroke of evil genius:

title Fedora (
        root (hd0,0)
        kernel /vmlinuz- ro root=/dev/mapper/vg_gamer-rootfs rd_luks=0 LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYTABLE=us nodmraid nouveau.modeset=0 rdblacklist=nouveau
        initrd /initramfs-
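The truncated kernel line above hides the interesting part. The dracut of that era activated MD/LVM from the kernel command line via options of roughly this shape (the UUID below is a placeholder, and the VG/LV names are guessed from the root= device above):

```
rd_MD_UUID=<array-uuid> rd_LVM_VG=vg_gamer rd_LVM_LV=vg_gamer/rootfs
```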

If that doesn’t look evil – I don’t know what does…