
Alberta, Politics & DevOps Pt. 1

DevOps principles for politics

Currently the population can easily be divided into the top 1%, the bottom 99%, and politicians. In DevOps we tear down walls and break silos, so we need to break the political silo with some rather radical (DevOps) methods. What good is a DevOps practitioner who is not participating in daily maintenance and building systems that cause less grief long term? In politics, however, we have a political class doing one thing only – politics – disconnected from the actual people they represent (the longer they are in office, the more disconnected they are, with very rare exceptions). So what if we were to enforce the “right” behavior by allowing a politician only a single term, with a mandatory “cool-off” term afterwards where they go back to their original job (teacher, doctor, etc.), and allow them to run again for the term after that? That also implies electing people who are actually representative of the demographics, not parachuted into the district by the party.

So what would we achieve by doing that? A politician who represents people would have a better idea of what ails his constituency and be less inclined to side with the 1% (remember – he goes back to his old job after his term, so he needs to make his own life better and, by proxy, others’). This is very much akin to the OpenSource approach: “scratch your itch” (and help somebody).

Eclipse, EGit, GitHub and headaches…

I have been using Eclipse as my Python development environment ever since the KDevelop folks decided to abandon the Python platform (which is a pity – I liked KDevelop’s feature set better than PyDev in Eclipse). Anyway, I’ve got multiple projects on the go, and going through the hassle of setting up SourceForge accounts for them is not something I look forward to (especially since those are small projects). So I decided to take the plunge and explore GitHub.

Creating the account on GitHub was fairly straightforward, and I also created/imported my SSH key along the way. However, making Eclipse talk to it was one heck of a hassle. I started off following Lars Vogel’s tutorial (really, you need to read all of it, including the GitHub section), but I couldn’t “push” my changes back to GitHub, getting “Authentication error” messages. That wasn’t what I had expected, so I read through the GitHub tutorial and did the “manual” set up of the git repo from the command line (successfully), yet removing and re-adding the project to Eclipse from scratch yielded no positive results. That’s when I got to the EGit Wiki and realized that since I had created a “custom” key for development purposes, I had to add it to the list of keys Eclipse is aware of. So, under “Window” > “Preferences” > “General” > “Network Connections” > “SSH2” I added my key and… oh miracle! I got the prompt for my password. From that point on it looks like GitHub is functioning for me; now I have to figure out how to properly use Git 🙂
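For reference, a minimal sketch of that manual command-line setup (the key file name and the repository path are placeholders – substitute your own):

# tell SSH to present the custom key when talking to GitHub
$ cat >> ~/.ssh/config << 'EOF'
Host github.com
    IdentityFile ~/.ssh/github_dev_key
EOF

# verify that GitHub accepts the key (prints a greeting, no shell access)
$ ssh -T git@github.com

# point the local repo at GitHub and push
$ git remote add origin git@github.com:username/project.git
$ git push origin master

Once the command-line push works, any remaining failures are Eclipse-side – which is exactly how I narrowed it down to the missing SSH2 key entry.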

Fedora 16 LXDE Xorg keyboard layout switching

I was trying to set up a laptop with Fedora 16 & LXDE to have alternative keyboard layouts, and it turned out LXDE is pretty spartan when it comes to such things. So back to Xorg setup. An on-the-fly change:

$ setxkbmap -option '' -option grp:switch,grp:alt_shift_toggle 'us,ru(phonetic)'

and a somewhat more permanent solution is (obviously) in /etc/sysconfig/keyboard:

KEYTABLE="us"
MODEL="pc105+inet"
LAYOUT="us,ru(phonetic)"
OPTIONS="grp:switch,grp:alt_shift_toggle"

With the addition of the LXDE applet via:

“Right click on the panel → select Add / Remove Panel Items → Add → select Keyboard Layout Switcher and click Add. Use the Up and Down buttons to move the plugin to the desired position. You can now switch layouts by using the keyboard shortcut or by clicking on the xkb plugin.”
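To double-check what the X server actually ended up with, query it – the output should mirror the values from /etc/sysconfig/keyboard above:

# print the rules, model, layout and options currently in effect
$ setxkbmap -query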

Fight cloud with cloud (#OccupyCloud ?)

With the advance of clouds and the aggressive invasion of “social services” like Facebook, MySpace, Google*, etc., it looks like there is no space left for a person’s private data (the “No Space” part of Naomi Klein’s No Logo comes to mind). As soon as information is fed to Facebook, Google or another entity, it stops being the property of that person and becomes the property of the company. Another thing that is happening is the annihilation of local services and local communities and the removal of local knowledge (it sounds like reversing that trend in Egypt helped the revolution). At present, to know the community that is right at your doorstep, you have to go to Facebook, Twitter, MySpace, etc. and explore it there. It’s not hard to imagine Facebook’s* services disappearing one day (either entirely – the South Park way – or partially – the Facebook way). That could have a very measurable negative impact on a community hooked on such services. The scenario can be reapplied multiple times for different “cloud providers” and different “communities”. In other words, people are in great danger of losing not only their personal data but also their collective/community data. Imagine losing all of Dickens’s books overnight (or Orwell’s), or any other cultural heritage that belongs not to a single individual but to entire nations or even the entire planet.

There’s a solution, and it is the most antagonized creation of the IT industry: BitTorrent. Content publishers of all kinds (MPAA, RIAA, BlahAA, etc.) are after BitTorrent users, ISPs throttle BitTorrent down to a trickle, software manufacturers are for the most part scared out of their minds, and the media demonizes BitTorrent users. These are all entities that want to own a person’s data but don’t want to give back much: Blu-Ray wants to know all about a person’s movies and lock him out if it doesn’t like something, ISPs want to know what a person is doing online and sell him or her out to the highest bidder, and software producers want to know a consumer’s every move and turn it into a commodity or force-feed him advertisements. The common thread is stripping the consumer of his privacy and his rights, and commoditizing him or her.

As Google says in their own words, to their investors:

Who are our customers? Our customers are over one million advertisers, from small businesses targeting local customers to many of the world’s largest global enterprises, who use Google AdWords to reach millions of users around the world.

And as Mathew Ingram sums up in his article:

As the saying goes: If you’re not paying for it, then you’re the product being sold.

Linking all of the above with a brilliant presentation by Mark Pesce, some things come to mind:

Peer2Peer distribution + Localization + IPv6 = Freedom

The above needs some explanation and requires some technical skill to grok. The equation is actually much more complicated than it looks; here’s what it translates to (or was born from):

Following Mark Pesce’s logic, the more popular a resource is, the more available it is. Note also that the resource does not exist in any single location; instead it exists on dozens of computers at the same time. Such distribution is a bonus for any sort of freedom movement (WikiLeaks, anyone?), as it removes the single point of entry (ISP, domain registrar, government, etc.) that can be sued or scared into dropping the hosting of such content. Just as Mark argues (and as everybody has known for a while now), once content is published online it takes on a life of its own and can’t be contained – and in the Peer2Peer scenario the survival rate is even higher.

Private peer-to-peer networking seems to be developing too: N2N, RetroShare, etc. This brings us one step closer to implementation.
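As a taste of what such a private tunnel looks like in practice, here is a minimal N2N sketch (the host name, community name and key below are made up):

# on one publicly reachable machine, run the rendezvous point
$ supernode -l 1234

# on each member's machine, join the virtual community network
$ edge -a 10.99.0.1 -c ourtown -k s3cr3t -l supernode.example.org:1234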

Back to our equation: localization is needed to retain community information within the community (because of its high appreciation and value in this context) while making it available to everybody outside at speeds proportionate to demand. In other words, if your town has a pile of resources, it wants to share them primarily locally, and if anybody outside the community is interested as well, the law of latencies helps here. Currently ISPs are the gatekeepers, so if there’s no ISP in town – no data sharing for you. In other words, tech-savvy communities are hostages of ISPs. The alternative is a local mesh network that doesn’t need an ISP. All the “spare parts” are readily available – WiFi-equipped devices are on every corner, so turning them all into access points could create a local “roaming zone”. With Peer2Peer-based content distribution (think HTTP-over-BitTorrent), a community can host its own sites/forums/mailing lists/you name it without ever needing a provider. It’s even possible to use different carriers – HTTP-over-SMS, old-school dial-up, even pulling an ethernet cable across the driveway to your neighbour’s house, Bluetooth, infrared, etc.
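As a sketch of that “roaming zone” (the ESSID and channel are arbitrary), putting a WiFi card into ad-hoc mode lets nearby machines peer directly, with no access point and no ISP involved:

# as root: join (or create) an ad-hoc cell that neighbours can also join
$ ifconfig wlan0 down
$ iwconfig wlan0 mode ad-hoc essid communitymesh channel 6
$ ifconfig wlan0 up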

Localization is good, but inter-community communications are still needed. Now is the time to invoke FidoNet – an asynchronous, distributed network of semi-autonomous nodes. A brilliant idea that was both right for its time and too advanced for its time. Taking a close look at its node organization, it is exactly as described above, except it required phone lines. That is where IPv6 comes into play. FidoNet had a node list and network addresses assigned by a central authority, but essentially addresses were unlimited, just like with IPv6. If we take IPv6 as the transport layer, we’ve almost resolved the problem of compatible addresses across the globe – every single machine can have a unique address, and routing can be done based on that. Now the idea doesn’t seem so crazy and distant, does it?
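To make the FidoNet parallel concrete: with IPv6 every node can self-assign a practically unique address without asking any central authority, e.g. from the Unique Local Address range (the prefix below is an arbitrary example):

# as root: give the mesh interface a unique local IPv6 address
$ ip -6 addr add fd12:3456:789a::1/64 dev wlan0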

A couple more details to make it more attractive and add more meat to it: since we’ve got mesh networking, the IPv6 protocol, and BitTorrent-like distribution of content, we have freed ourselves from any hard dependency on a specific physical transport medium. Whether it’s a phone line from my house to my neighbour’s, shared WiFi, P2P radio antennas, ham radio, or pigeon mail – when somebody locally requests a pageX that is not part of the local community’s infrastructure, its download is scheduled throughout the community network of nodes, and at the first opportunity it is downloaded to the computer of whoever requested it. Now pageX is local. The next person asking for pageX will get it locally! The more popular a page is, the more people will store it locally, so as per Mark Pesce, the download speed goes up. A-ha! With clever mechanisms of caching and expiry it’s not so hard to devise a fairly efficient method of keeping things that are of interest to the population readily available (and not controlled by anybody).
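The expiry half of that mechanism can start as crudely as a daily cron job (the cache path is hypothetical, and this assumes the filesystem records access times): anything nobody has touched in a month gets dropped, while popular items keep having their access times refreshed and survive:

# drop cached items that haven't been accessed in 30 days
$ find ~/p2p/cache -type f -atime +30 -delete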

The next aspect of this theme is permanent local storage. While in the above scenario people keep downloading and storing other people’s stuff locally, it’s important for that “other people’s stuff” to exist in the first place. All that needs to be done is defining a “local storage” on all the nodes, whose content, just like with BitTorrent and other Peer2Peer networks, is shared freely upon request with the rest of the world but permanently resides on the local computer (unlike cached content that a person requested today or yesterday, which can expire tomorrow). In that case the user’s machine becomes the “host” for the content, but if the content becomes popular, the burden of serving it is shifted to the… wait for it… wait for it… cloud!

The above resolves the problem of content ownership and content persistence. If I like what I downloaded, I move it to my local storage, making it something that I host permanently; now there are two hosts hosting the same content (with the same signature) on the Peer2Peer network. It looks like having three different types of storage should cover the majority of use cases: a private store, a public store, and a cache store. The private store holds data you do *not* intend to share with anybody (personal documents, pictures, etc.), the public store holds [personal] information intended for sharing – movies, sites, files, music, documents, etc. – and the cache stores only transient data: data that a person downloaded for whatever reason and is keeping for the time being to speed up subsequent access (and this part is the only one controlled by automatic expiry measures, etc.).
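On disk the three stores need not be anything fancier than three directories (paths hypothetical), with the Peer2Peer daemon pointed only at public and cache, and only cache subject to the expiry job above:

# the three stores as plain directories
$ mkdir -p ~/p2p/private ~/p2p/public ~/p2p/cache

# keep private data out of everybody's reach
$ chmod 700 ~/p2p/private

# "I like what I downloaded" - promote it from cache to permanent public storage
$ mv ~/p2p/cache/pageX ~/p2p/public/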

The above may sound far-fetched, but something is already happening in this domain – the FreedomBox Foundation has just started its operations, and if you look at its goals, they are already thinking in that direction:

We’re building software for smart devices whose engineered purpose is to work together to facilitate free communication among people, safely and securely, beyond the ambition of the strongest power to penetrate, they can make freedom of thought and information a permanent, ineradicable feature of the net that holds our souls.

Currently it looks like they target only communication itself, not data preservation, but why wouldn’t that be the next step?

To get around ISPs getting overly sneaky and curious, a layer of Tor could be implemented between inter-community nodes or even throughout the community.
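For instance, a community node could publish its content as a Tor hidden service; the torrc fragment below is a minimal sketch (the directory and ports are examples):

# /etc/tor/torrc - expose a local web server as a hidden service
HiddenServiceDir /var/lib/tor/community/
HiddenServicePort 80 127.0.0.1:8080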

Imagine the applications for sharing information. Assume person A lives in community X. Now A goes on a trip to community Y and, of course, brings his laptop (?) with him. While at the bus station, everybody in close proximity gets to “know” what A knows and shares content with him (if they choose to) – anonymously, at great speeds, and without paying fees to a carrier.

The last piece missing in all of the above is an out-of-the-box hardware/software platform that would support this. FreedomBox doesn’t seem to have goals that reach this far, and we won’t witness any great movements from Google, Microsoft, Apple or any other existing commercial entity that is not deeply rooted in the OpenSource world. All of the proprietary vendors gear their operations towards other corporate/commercial entities rather than the average person (as mentioned and shown earlier). It is not in their interest – without our data they have nothing to sell.