
Saturday, May 20, 2023

Disaggregation Dilemma - Part 2...(from static land-use color codes to something like OSM features indexing)

Information density at the lowest levels of disaggregation

Let us continue the discussion on information loss and data aggregation by comparing the maps of the different planning levels shown in the first part of this blog with their corresponding scales on OpenStreetMap and Google satellite imagery.

Instead of starting from the top (the city level), let's start from the bottom (the layout level) this time, and remember the words of Prastacos again- 

that computerisation basically allows us to maintain data at the lowest level of disaggregation and then readily aggregate it as the need arises.
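
To make that concrete: with an open-source tool like GDAL/OGR, aggregation becomes a single command run on the disaggregated base data. Here is a rough sketch, assuming a hypothetical buildings.gpkg layer with zone_id and landuse fields (and a Spatialite-enabled GDAL build):

# Summarise building-level land use up to the zone level, on demand
ogr2ogr -f CSV zone_landuse.csv buildings.gpkg -dialect sqlite \
  -sql "SELECT zone_id, landuse, SUM(ST_Area(geom)) AS total_area FROM buildings GROUP BY zone_id, landuse"

The base data never changes; summaries at the zone level, the city level, or any other level are generated as and when the need arises.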





There is of course no restriction on further zooming into OSM (OpenStreetMap) or the satellite imagery to study the area in greater detail. Similarly, one can zoom out and reduce the scale to any extent to study larger areas. One does not have to stay restricted to certain pre-defined categories of map-scales (and, needless to say, we are also freed from the scanned copies of water-soaked blue-prints that the government generously shares as "open data" - we get a feel of the power of REAL open data).

 

What is information loss due to aggregation ?

If we zoom into the area of the layout plan in the City level land use plan, then this is all the detail that we could possibly get -

 

Now compare the above level of detail (left) with what we saw in the case of the osm image (right) -

 


The above comparison is a simple visual representation of the amount of information loss that happens when spatial data is aggregated to higher levels using non-computerised cartographic methods.

There was simply no other way - in the absence of computers - than to prepare maps at different scales, covering different geographical extents, in order to show different planning levels.

However, none of those limitations remain if one is working with digital spatial data and computers - there is no information loss at higher levels of aggregation of the same map. 

The trouble is that we continue to operate with the same methodology even when we have computers and geo-spatial software at our disposal.

Coding spatial information - learning from OSM features

Once one understands the fundamental manner in which computerisation allows aggregation and disaggregation of data, one can also understand that the manner in which land and building uses were coded in earlier, non-computerised map-making systems is no longer adequate or relevant.

Incidentally, a very powerful and effective alternative to older methods of land-use coding has already started appearing in the form of the "Map Features" of OpenStreetMap. 

Here is a description of the system from the OSM wiki -

OpenStreetMap represents physical features on the ground (e.g., roads or buildings) using tags attached to its basic data structures (its nodes, ways, and relations). Each tag describes a geographic attribute of the feature being shown by that specific node, way or relation.

Most features can be described using only a small number of tags, such as a path with a classification tag such as highway=footway, and perhaps also a name using name=*. But, since this is a worldwide, inclusive map, there can be many different feature types in OpenStreetMap, almost all of them described by tags.
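
To illustrate, a neighbourhood school might be mapped as a way carrying tags like these (a made-up example, but the tag keys are standard OSM conventions):

amenity=school
name=Government High School
building=yes

Every such tag is machine-readable, so the same data that draws the map can also be queried, filtered and counted.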

The OSM feature indexing system is extremely thorough and exhaustive, and designed to be read by computers. Have a look at the difference between a typical colour coded land use system and the OSM map features system below.

This is how land uses are colour coded in a typical land-use plan -


And this is what the OSM map features indexing system looks like -



Just a casual glance is enough to see the power of this feature indexing system. It lists the various types of uses as key-value pairs, states which OSM map elements they belong to, provides a clear description of each feature, shows how it is rendered, and includes photographs of typical examples.
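
And because every feature is indexed this way, the whole database can be queried by tag. As a rough sketch of the kind of query this makes possible (using the public Overpass API; the exact query form may need adjusting), the following fetches every school mapped within Delhi:

curl -s 'https://overpass-api.de/api/interpreter' \
  --data-urlencode 'data=[out:json];area[name="Delhi"];nwr(area)[amenity=school];out center;'

Try doing that with a colour-coded PDF.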

The wealth of information that gets collected and maintained using such an indexing system is truly mind-boggling.

The analytical opportunities such systems open up can help us go toe-to-toe with the most complex urban problems that we face - and win.

 



Wednesday, May 17, 2023

Disaggregation dilemma - Part 1...(Of GIS based PDFs and Water Soaked Blue-prints)


What is wrong with our land-use plans ?

Well...nothing. Except, perhaps, the fact that they belong to an earlier epoch of technological development - a period when one necessarily had to prepare maps at different spatial scales in order to show greater or lesser detail, and use specific colours to aggregate the primary land uses at different scales - for example, yellow for residential use and red for commercial use (depending on prevalent cartographic rules).

One can also say that the technology of land-use maps, as they continue to be used in urban planning in India, corresponds to the period of map making prior to the advent of computerised cartography and geo-spatial analysis.

Using present technology, we do not need to switch between different maps prepared at different scales to study different degrees of spatial detail. Instead, we can simply zoom in and out within the same map. 

In most aspects of our lives we take this for granted - when we are booking an Uber; or checking directions to a destination on Google Maps; or checking how far away the Swiggy delivery partner is at a particular point of time.

In all such businesses, computerised geo-spatial analysis and decision-making is not just one of the components to be considered -  it is the most fundamental science and technology on which the business operations play out.

However, in a vital and complex activity such as urban planning, whose social and economic significance far exceeds that of profit maximisation in the gig economy, such technology is still something of a novelty, far from having been internalised by the rank and file of the profession.

In fact, the inadequacy of technical knowledge becomes amply clear precisely when one takes a look at the manner in which the planning profession attempts to internalise geo-spatial technologies. I discussed this in an earlier blog.

It is perhaps too difficult for our planning professionals and educators - too busy flaunting tech-terms and buzzwords - to come to terms with the simple fact that if your planning maps are made using GIS software then you do not need separate sets of maps at the city level, the zone level and the layout level - they are all part of the same geo-spatial database !

I am not even getting into the travesty of making such "GIS" maps available online in PDF format and then providing the attribute data in separate spreadsheet files and THEN announcing this pointless hotch-potch as Open-data ! A tighter slap on the face of the open-data movement was never landed. This is not open-data...this is an open disdain of the citizen.

 

From GIS based PDFs to Water soaked Blueprints

Let's have a look at such maps as they are available from the website of the Delhi Development Authority -

a) Here is the "big honcho" - the proposed land-use map of all Delhi. The highest level of the plan, and the one with the smallest geographical scale and level of detail. Most of the time, lay-persons attempting an analysis of the Delhi Master Plan remain pre-occupied with this level. Of course, it shows nothing more than the most general and most aggregated land-use distribution at the level of the city.

b) The next level of planning detail comes in the form of zone level land-use maps. Shown below is the map of Zone-F in South Delhi. As per the zonal plan report, already in 2001 this zone had an area of 11,958 hectares (i.e. about 120 square kilometres) and a population of 12,78,000. That basically means that while it is just a part of the city of Delhi, it is still larger than many smaller cities of India (it is, in fact, larger than the smart city of Bhubaneswar in terms of population).

The Zone too, therefore, is at a substantially high level of aggregation and can be compared to city level land use plans of one-million plus cities in India.

(NOTE - pay attention to the key-map in the attachment below and marvel at the cartographic genius of whoever prepared this "GIS based" pdf output)

c) And something peculiar happens when we go down to the level that contains the maximum geographical detail - the layout plans, which are more like plans for a cluster of neighbourhood blocks.

Here is what the plan of one of the layouts constituting Zone-F looks like...if you can make anything out, that is. The keen observer would realise that this is actually a well drafted layout map (at least the key map is correct !), but we have suddenly descended from the world of GIS based PDF map outputs to the world of water-soaked and worn-out archives of crumpled gateway sheets and blueprints.



This is what gets uploaded as digital layout maps on the website of the premier urban planning agency of the capital of the country. 

There is therefore a complete dissonance between what digital and geo-spatial technologies truly are and how they are being utilised. 

In this matter the critics and activists of the civil-society and consultants of the private sector are often more technically incompetent than government planners. The government officials may not be familiar with the modern software but they know their cartography well enough (as illustrated by the water-soaked map), while civil society critics and private sector consultants (who often actually prepare the "GIS" outputs) are often poor in both technology and cartography.

 

In the next part we will see how computerised geo-spatial methods eliminate the problems of aggregation by allowing data to be maintained at as disaggregated a level as its granularity allows, and by aggregating that base-data to whatever level is required using the processing power of the computer.

In the words of planning expert and theorist Poulicos Prastacos -

"Data should be maintained at the lowest level of disaggregation and then readily aggregated as the need arises."

(Source - 'Integrating GIS technology in urban transportation planning and modeling' - P. Prastacos)

To be continued...



Saturday, May 6, 2023

The Data exists...right under our Mouses !

The capital irony

It is perhaps a capital irony of our times that precisely at a time when computers are more powerful and affordable than ever before, and when the Linux + FOSS (Free and Open Source Software) movement provides access to powerful and previously expensive software, the general ability to use computers effectively to address the various problems faced by our cities is at an all-time low.

I myself come from a background of primarily qualitative and participatory techniques in urban planning. I continue to have a natural fondness for such techniques, but have increasingly also discovered the power that effective use of computers brings to my work.

Contrary to the myth that the quantitative and qualitative worlds are poles apart (which leads to the further myth that professionals dealing with qualitative techniques cannot use computers for serious quantitative analysis), the two are in fact friends and allies of each other and help each other continuously.

Without waxing complex, think of a rather simple example. I would like to undertake participatory exercises in various slums in my city and I use all kinds of creative ideas to undertake the same inside those communities.

But alongside that, I could also prepare a GIS database of the slums in the city that gives me spatial and quantitative information on slums - such as their location, distance from each other, distance from other city facilities, size and density of the settlements, the population and occupational characteristics of the settlements etc.

This quantitative database can actually help me increase the effectiveness of my qualitative techniques by helping me to schedule meetings, use different techniques in slums of different sizes and shapes, check the probability of consensus-building (fewer meetings could build consensus faster in a smaller slum than in a larger and denser slum) etc.

Rather than focusing too much on whether to deploy quantitative or qualitative methods, it is better to focus on the problem that needs to be solved and deploy whatever methods that may be necessary.

Why computers ?

As long as I want to do participatory activities in a handful of slums, I may not need the support of computers at all. However, if I would like to undertake such activities in tens or hundreds or thousands of slums, then I begin to feel the need for the processing power of the computer.

It is as simple as that.

As more and more resources are made available to various urban development programs and schemes in India, their sizes, duration and scale of operation are all increasing. It is not difficult to understand that in a country the size of India, urban development projects would need to be undertaken at a scale where one can at least hope to make a meaningful difference. 

But of course, computers need instructions to follow - and they need data to work on.

The data exists...right under our mouses

Quite often, the impossibility of obtaining data is cited as one of the main barriers to effective use of computers in solving urban problems in India. I have written on this topic on multiple occasions. And I have stressed in earlier blogs that mere accumulation of digital data is not of much use if one does not know how to use computers effectively to process it.

However, another capital irony of our times is that much of the data whose absence we so lament - does indeed exist...and sometimes right under our noses (or mouses).

Let me demonstrate.

This particular link will take you to the dashboard of the "GIS based Master Plan" sub-scheme of AMRUT. 

The very first component of this sub-scheme was geo-database creation and in the following screen-shot of the dashboard we can see its status -

 



If we look at the first three steps of the component, we can see that satellite data had been acquired and processed for about 450 cities. The pie chart on administrative works is not self-explanatory, but it could mean that the National Remote Sensing Centre (NRSC) of the Indian Space Research Organisation (ISRO) may have handled the satellite data acquisition and processing for 240 cities and private companies may have done it for another 220 cities.

In any case, according to the official dashboard itself, we can conclude that processed satellite data exists for about 450 cities. As per the status chart, final GIS maps also seem to exist for 351 cities.


From if it exists...to where it exists

Finding evidence and clear arguments for the claim that something exists is the first step in finding it. If I know for sure that something exists, then I need not succumb to the fallacy that it doesn't even exist.

The task after that is to discover where it exists, rather than wonder if it exists.

The same method can be applied to understand exactly what all data has been collected and processed under the myriad central and state government schemes that are going on in the country and have already been executed in the past.

Believe me, we will have more data than we would need for getting most of our tasks done.

The catch here is this...a person who does not have the imagination to discover the data most likely would not have the imagination to use that data either.

But let's keep that blast for a later post ;)

Thursday, May 4, 2023

Indian Space Assets and Urban Planning

Smart in Space...Clueless on Land

As India's capabilities in the field of space technologies increase continuously, the gap between the data generated by our space based assets and the utilisation of the same for solving pressing social and economic problems is felt palpably...and painfully.

After all, the vision of our space program has always been to -

"Harness, sustain and augment space technology for national development, while pursuing space science research and planetary exploration."

And this is the level that we have already reached in this domain -

 

When it comes to space, we are not just good - we are among the best in the world and sometimes better.

The Indian satellite Cartosat-3, launched in November 2019, is one of the most advanced high-resolution earth observation satellites in the world.

With a resolution of 25 cm, Cartosat-3 surpasses the American WorldView-3 satellite owned by Maxar Technologies, which has a resolution of 31 cm.

And guess what is written in the Mission document of Cartosat-3 as the primary application of this third generation satellite -






You can access the document on this link.

One would expect such high resolution products to be developed for the defence sector alone, but we have reached a level of technological development where the mission document lists purely civilian sectors as the target users of Cartosat-3.

I seriously wonder whether the scores of professionals of India's urban development sector are aware that one of the most advanced products of one of the most advanced fields of human technology has been produced by their country for them.

Is it really so hard to imagine what all becomes possible when you have high resolution satellite data available for the whole city and the region ?

From data collection to serious data analysis

With the development of advanced earth observation satellites we can finally take a pause from the gigantic spatial survey exercises that take up all the energy and creativity that could and should be dedicated to data analysis, forecasting, modelling etc. In any case, the fragmented and project specific data generated by these large urban development projects is used poorly and then abandoned and forgotten the moment the projects come to an end (refer to previous blogs for more details).

Consider the fact that Jaga Mission - which created one of the largest high-resolution geo-spatial databases of urban slums in the world - had to deploy three separate drone survey companies, multiple quadcopter drones, teams of surveyors and the resources of over a hundred city governments to complete that survey in less than a year.

Given the logistics of covering almost 2000 slums in 109 small and medium cities spread across the length and breadth of the state of Odisha, which has an area of 155,000 square kilometres, it was a daunting task. Typically, in large urban development projects in India (and they are all large nowadays), so much energy and resources are devoted to conduct the surveys and create the datasets that serious analysis never gets a chance to take off. 

Such daunting logistics would also imply that, while the urban reality is extremely dynamic, the probability of repeating such a spatial survey exercise at regular intervals would be very low.

From the point of view of temporal change, the ultra-high resolution data collected under Jaga Mission in 2018 may already be out of date. And there exist no plans of updating that data set.

Satellite data has no such problem with temporal resolution (the interval of time after which the same area of the surface of the earth would be captured again by the satellite in orbit).

With Cartosat-3 data, one can not only get a detailed picture of slum settlements (not just in Odisha, but throughout the country), but one can also analyse spatial changes over time and relate the location of slums to other features in the city (none of this is possible with the Jaga Mission drone data, which only captured images of individual slum settlements).

Data without scientific knowledge is useless

There is a never-ending clamour for data among urban development professionals and researchers in India...a tendency I described as "data hunting-gathering" in a previous blog. However, it was not access to unlimited amounts of data that made the Indian space program what it is today - but scientific knowledge and the intelligent application of that knowledge.

Data is crucial - but only when one possesses the necessary scientific knowledge to effectively use that data. 

The trouble is that many (definitely not all) professionals demanding access to all kinds of digital data - i) would not be able to recognise that data if it were staring at them from their computer screens; and ii) would not know what to do with the data even if, by some miracle, they figured out that it was indeed the data they were looking for.

The usual excuse given by scholars and practitioners alike is that urban challenges are too complex. Of course, they are complex - but complexity of a problem is as much a function of the knowledge of the problem-solver as it is an intrinsic characteristic of the problem itself.

Finding an address too can be a very complex task - if someone doesn't know how to read a map. 

It is always nice to check whether a problem is really complex or if I am too dumb ! It may be very nice to discover that it is the latter, because then I know that the problem is solvable and I also get an opportunity to study and learn something new and useful.

Unless that knowledge gap is bridged, the brilliant developments in the field of Indian space technologies shall be unable to solve the relatively mundane challenges of urban planning and development.

And that would be a real shame.

Wednesday, January 25, 2023

(Smart when you create...dumb when you consume) <-- How to recognise unnecessary technology and some Jaga Mission Stories


The challenge of humanity, since the industrial revolution, has not been one of scarcity, but one of excess (and of the exploitation and inequality that naturally appear when that excess - which can provide for the whole world - ends up being controlled by the few). 

The trouble with technology is the same. How much is enough ? Where does one draw the line between the need and the want...the useful and the useless ?

Perhaps, there is a simple way to distinguish the technology that one needs from the technology that is unnecessary, superfluous and, in all probability, harmful.

Any technology which makes you smarter while you use it and keeps making you smarter and more creative the more you use it, is a healthy and useful technology for you. The technology that makes you dumb and dependent when you use it, is neither very useful nor very healthy in the long run, irrespective of the convenience that it brings.

Most of the time, we use (rather consume) technology that makes the creator of that technology - and those who control the creators - smarter and more powerful, while making us dependent and constantly distracted (which cannot but cause a steady dumbing down over time).

Technology...Consumer and Creator

A casual glance at the way people use their computers - one of the most powerful tools of modern technology - can confirm the above statements.

Google-maps may make map-reading, navigation and orientation very easy, but it can also decrease our ability to use our sense of direction, powers of observation and of memory to remember and locate landmarks, judge distances etc.

Typically, when we use google-maps our eyes stay glued to the smart-phone screen (the effect is the same even if we are looking at the road while driving...the app tells us everything), but if we were to go old-style with paper maps we would have to constantly look up from the map to scan the surroundings and ensure that we are at the right spot.

Of course, I am absolutely not suggesting dumping google-maps and returning to paper maps. After all the whole purpose of technology is to make life more convenient for humans so that they don’t have to engage in ceaseless manual labour and mundane tasks and engage in more meaningful pursuits instead.

But if the consequence of such “convenience” is a steady process of dumbing down and engaging in such "meaningful pursuits" like spending hours on social media and the "struggles" of becoming an influencer, then it might actually be healthier and more meaningful to return to a life of heavy manual labour (if one can, that is).

The idea is not to turn one's back on technology (it simply cannot be done), but to be aware of the manner in which technology is owned, controlled and provided to us, so that we can be conscious of its effects on us.

GPS and the Kargil Lesson

Talking about the conveniences of positioning systems, that is exactly the lesson that the Indian army learnt the hard way during the Kargil war of 1999. India was denied access to the Global Positioning System (GPS) by the USA exactly when she needed it most, in the context of high-altitude mountain warfare. The Kargil war was also a trigger event that led to the development of NavIC (’Navigation with Indian Constellation’...the word ‘navik’ means sailor in Hindi), India’s alternative to the GPS.

Therefore, the simple learning that the above case provides, is that it is alright to be a user of technology, as long as you also play a role in developing it (or at least understanding how it operates), but it can be downright dangerous if you forever remain merely a consumer of technology.

Still, no matter how dependent google-maps may make you, it is still a very useful tool. What arguments could one possibly offer to justify the helpless addictions caused by the largely useless social-media platforms ?

I am sure the people who develop these platforms continue to sharpen and develop their skills in programming and problem solving, whereas the users continuously lose the ability to use the computer for the main task it was created to perform – to compute. On top of that, the more they use...the more they generate data for these very same companies.

One cannot wait for society to change to protect oneself from such devastating trends...one simply has to jump off this crazy train oneself.

Linux and Synaptic Connections

For me that jump was in April 2016, when I made the switch from Windows to Linux...and never looked back.

I realised that the users of open-source operating systems and software, inevitably, start transforming into developers with time.

Or as my dear friend Titusz Bugya, who introduced me to Linux and taught me pretty much everything that I know about the proper use of computers, once put it jokingly-

"Linux IS user-friendly !! It is a friend of the User...not of the idiot !"

As the Linux beginner starts overcoming the hesitation and fear of the terminal window and has the first conversations with the computer using the command line, s/he begins to hone the most essential and fundamental skill required for solving a problem, no matter how complex - the ability to formulate a question.

The clearly formulated question leads to the precisely formulated command and that leads to the desired result.

Here, the Unix philosophy, which is also used in Linux, of using programs that do only one thing and do it very well becomes a great tool. It encourages you to break down complex tasks into component parts - which by themselves may not be as overwhelming - and then deploy the appropriate programs to tackle them one at a time. 
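
Here is a small taste of that philosophy in action - a one-line sketch (files without an extension will show up under their full names) that answers the question "which file types dominate this folder tree ?" by chaining five single-purpose programs:

find . -type f | sed 's/.*\.//' | sort | uniq -c | sort -rn

find lists every file; sed strips everything up to the last dot, leaving the extension; sort groups identical extensions together; uniq -c counts each group; and the final sort -rn ranks the counts.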

It is not just about approaching and successfully completing a task, but about developing a certain way of thinking and approaching a problem - or as the geo-political expert Andrei Martyanov put it in his brilliant book - "to develop complex synaptic connections which are applicable for everyday life."

Some more hands-on stuff...(OR) how Linux helped Jaga Mission in Odisha

In the previous post I had started discussing the Linux command line and the incredible flexibility and power it provides to the user.

Using the command line, and progressing (which happens quite naturally) towards scripting and programming, also halts and reverses the "Smart when you create - Dumb when you consume" process.

The simple fact is that we can't depend on an external IT specialist or ready-made software for most of the problems that we face regularly in our work.

Only we know the specific problems that we face in our particular work environments - and they may pop up anytime. It is impossible to out-source all such problem situations to an external software consultant.

Similarly, there may be many tasks at work which could be solved and/or automated through the command line or scripts (a series of commands written down in a file for execution). I have already shown some examples in the previous post.

In this post let me show another example of a slightly higher order of complexity than the ones I showed earlier.

The implementation of Jaga Mission, the flagship slum improvement project of the Government of Odisha, where I worked as a consultant urban planner, involved the creation of a pretty huge geo-spatial database.

In the first phase of the project, about 2000 slums located in 109 cities and towns of the state were mapped using quadcopter drones. The very high resolution (2.5 cm) imagery was geo-referenced and digitized to create the necessary geo-spatial data layers.

The following were the major data layers that were prepared for each slum settlement -

a) the high-resolution drone image

b) layer showing the individual slum houses

c) layer showing the slum boundary

d) layer showing cadastral (land ownership/tenancy) data corresponding to the extent of the slum settlement.

e) layer showing the existing land-use of the slum settlement

This led to the creation of a pretty substantial geo-spatial database of about 10,000 map layers. In an earlier blog on operational parameters, I have explained how this database was crucial to fulfilling the goal of Jaga Mission of granting in-situ land rights to slum dwellers. 

The geo-spatial data was particularly useful in complex situations, such as slums located on certain specific categories of land where granting in-situ land rights may not be possible.

When this data was handed over to the Jaga Mission office by the technology consultants, the data-sets were organised in a manner which made quick retrieval and analysis difficult.

The individual layers were stored in a series of folders and sub-folders in a manner as shown in the diagram below -

 



In order to retrieve any layer of a particular type (say, the slum household map) of any slum, one would have to first open the folder of the respective district; then the folder of the respective ULB (Urban Local Body, i.e. the city); then the folder of the respective slum; and then the necessary layer(s).

The file names of the individual layers just mentioned the type (e.g. "hhinf" for the household layer; "rplot" for the revenue plot/cadastral layer etc), without giving any further information suggesting the name of the slum or city.

While this is absolutely fine for manually retrieving the separate layer files and operating on them in a Geographic Information System (GIS) software, this method of data management is incompatible with any attempt at programming, automation or quick retrieval.

And when we are dealing with 30 districts, 109 ULBs, 2000 slums and 10,000 data layers, quick and precise retrieval is essential. Any kind of programming or process automation could also be extremely useful.

For example, it was decided by the Government that slums located on land belonging to the Railways may need to be re-located to alternate sites. This could be done by filtering the cadastral layers based on land ownership by the Railways, and then selecting the houses which intersect with those parcels from the slum-households layer.
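
For illustration, here is a sketch of what that filtering could look like with the open-source GDAL/OGR command-line tools (the field and file names are hypothetical, and a Spatialite-enabled GDAL build is assumed):

# Step 1 - pull out the cadastral parcels recorded as Railway-owned
ogr2ogr -where "owner = 'Railways'" railway_plots.shp rplot.shp

# Step 2 - keep only the slum houses that intersect those parcels
ogr2ogr houses_on_railway_land.shp hhinf.shp -dialect sqlite \
  -sql "SELECT h.* FROM hhinf h, 'railway_plots.shp'.railway_plots r WHERE ST_Intersects(h.geometry, r.geometry)"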

However, given the manner in which the files were named and organised, this process would have had to be done manually on a slum-by-slum basis. In the absence of an army of GIS technicians (something that Jaga Mission did not possess), the task was bound to revert to the even more laborious process of municipal staff and revenue officials physically visiting the slums and checking if they were located on railway land.

It was almost as if the elaborate digital database had never been created.

Titusz and I wished to rename and re-organise the data-files in a manner that would enable near instantaneous retrieval and processing. But, of course, even to rename the files (in order to enable scripting), we would need - you guessed it - scripting !....or else how does one rename 10,000 files stored in separate folders and sub-folders ??

So, we wrote a script which would loop over each of the 30 district folders and recursively go down each sub-folder until it reached the bottom-most level where the data files were stored.

Every time the script would move down a folder level, it would store the name of the folder as a variable. Once it reached the level of the individual file it would rename it by adding the relevant stored variables as prefixes to the original name of the file. The resultant file name would therefore contain the name of the city, the name of the slum and the type of the data layer (there was no need to add the name of the district to the file name).
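
A minimal sketch of the same logic in bash (not the actual Jaga Mission script; the folder structure is assumed to be district/ULB/slum/files):

#!/bin/bash
# Walk district -> ULB -> slum folders; at the lowest level,
# prefix each data file with the ULB and slum folder names.
for district in */; do
  for ulb in "$district"*/; do
    ulb_name=$(basename "$ulb")
    for slum in "$ulb"*/; do
      slum_name=$(basename "$slum")
      for file in "$slum"*; do
        [ -f "$file" ] || continue
        mv "$file" "${slum}${ulb_name}_${slum_name}_$(basename "$file")"
      done
    done
  done
done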

The following diagram shows the concept behind the script -

 

Once this process was completed, there was no need to store the files in separate folders and sub-folders. They could be kept in a single folder and files of any combination of city name, slum name and type could be retrieved instantly.
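
Retrieval from such a flat folder then reduces to a one-line shell glob - for instance (with hypothetical city and slum names):

ls Cuttack_*_hhinf*        # every household layer in the city of Cuttack
cp *_Sunapur_* ~/analysis/ # every layer of a slum called Sunapur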

Not only did we have fun trying to create a script that would solve our problem by making use of the names of the very folders in which they were stored (which was precisely the problem that we were trying to solve !), but we also ended up creating a fresh system which drastically reduced the time taken for analysis and decision-making regarding all future tasks.

Effectively, we used the problem to solve the problem.

As a direct consequence, it reduced the burden of manual labour which would have fallen on the shoulders of municipal workers and also reduced the problems faced by slum dwellers due to incorrect decision-making in a process as challenging as relocation.


More on those stories in the forthcoming blogs...

Monday, January 16, 2023

With great power comes great...Idiocy !


One of the tragedies of computers in the present times is that they are almost always used for things which they are not supposed to be used for. 

As the name suggests, the primary task of the computer is to compute - to take over the boring, repetitive and labour consuming computing tasks of human beings so that the species can focus on more creative and meaningful pursuits (i.e. human pursuits). 

However, if I were to observe the use of the computer by development sector managers and professionals, it would appear without doubt that the electronic computer was created for the sake of making unnecessarily heavy and pointless, graphics-loaded power-point presentations.

Indeed, if one counts the number of hours (sometimes, all the hours) that young professionals spend adjusting images, animations and text on their power-point presentations, one wonders whether computers have increased office-based manual labour by orders of magnitude instead of reducing it.

Of course, a major cause of this, which I have already written about in a previous blog, is the continued dependence on proprietary software despite the steady growth of open-source software and operating systems.

While proprietary software and operating systems treat computers users primarily as consumers of technology (distracting them with ever flashy and "user-friendly" software products, which are designed to make the consumer feel tech-savvy while simultaneously building a technological dependence akin to substance addiction), open-source software and operating systems encourage users to gradually transform into a free community of developers...liberating them from the addiction of any specific product created by a company (for its own profits of course) and enabling them to create their own products which are suited to their needs.


The super-computers in our pockets

In his brilliant talk titled "You should learn to Program", computer scientist Christian Genco said something which was quite eye-opening and embarrassing at the same time.


He showed a photo of the Apollo Guidance computer that was created by NASA engineers to do the complex calculations necessary for the Apollo-11 moon-landing mission. He then flipped out a smart phone from his pocket and said, 

"Your cell phone in your pocket right now, has the computing power to do the calculations of a million Apollo-11 missions simultaneoulsy ! NASA scientists in 1961 would have fallen to their knees and worshipped you for having this kind of technology !"


The Apollo Guidance Computer 


Christian then wrapped up the irony by showing a clip of a little video game and said,

"And what are you using it to do ??"


Here we are, faced with a zillion challenges that are hammering our clueless city governments from all sides....the challenge of affordable land; of urban poverty; of the vulnerability of the informal sector; of climate; of traffic; of housing...(the list goes on and on)...and we use one of the most powerful tools ever created by human beings to play video games, spend hours on social media, make colorful powerpoint presentations etc.

Enter the Command Line...may the Force be with You

Many years ago, when I was in high school, it was screens like the one shown below that made me run away from computers.


 

Little did I know that many years later this grim, dark screen - known as the terminal window - would turn into my biggest ally. I shall tell the story of my introduction to Linux in another blog. But for now, let us talk a bit about this dark screen. The terminal window is the one in which you can type in commands to your computer...when the computer responds, i.e. executes your command, you begin to sense both the true power of the machine and of yourself.

Of course, in the beginning the advantage is not very evident, and I wondered why I should not just go click-click with the mouse on the graphical user interface (GUI). After all, that is the most familiar and convenient way of interacting with computers running Windows.

But very soon the things you can do with the command line start overtaking all that you can do with your mouse...and then, suddenly, it goes totally beyond the reach of the mouse, as if a spaceship from "Star Trek" just jumped into warp speed and disappeared from sight.

Let's say, I want to create a folder called "data" using the command line. I would type the following at the command prompt and press enter -

mkdir data

On the terminal it would look like this -

 


 

But I can easily do the same by right clicking in the Windows file manager and then selecting the option for creating a new folder, right ?

But let us now try to make five folders with names data-1; data-2; data-3 etc. Now it begins to get slightly inconvenient to use the mouse.

But on the command line you would just have to type the following line and press enter -

mkdir data-{1..5}


And it's done.

But now....let's make 500 folders !

On the command line, all that you have to type is this tiny command -

mkdir data-{1..500}
That's it ! When you open your file manager, you see this waiting for you -



You have activated warp drive and left the mouse somewhere far behind in the universe. 

Now, imagine that these folders have been filling in over a long time with all kinds of files (word files; excel files; image files; pdf documents etc) concerning your work.

Suppose you want to create a new folder called data-pdf and copy just the pdf files into it. It may be that all 500 folders contain pdf files, or only some of them...you cannot be sure. What would be a mouse-based way to do this ? Open each folder, select the pdf files and then copy them into the new folder.

Or you could type the following two commands -

mkdir data-pdf
cp -r */*.pdf data-pdf


The first command -- mkdir data-pdf -- makes a new folder with the name data-pdf.

The second command -- cp -r */*.pdf data-pdf -- uses a command called cp (copy) on the wildcard pattern */*.pdf, which the shell expands to every file ending with a pdf extension inside each of the folders, and copies just those files into the newly created folder called data-pdf.
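
One caveat, in the interest of precision: the pattern */*.pdf only reaches one level deep. If pdf files were hiding in sub-sub-folders, bash's globstar option extends the same idea to any depth:

shopt -s globstar    # enable ** matching in bash
cp **/*.pdf data-pdf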

 

Well, well ! Now that is something, isn't it ? We asked the computer to perform quite a complex search-n-retrieval task and it did it...in much, much shorter time than the blink of an eye.

And we did this with just two lines of ultra-simple commands.

Imagine what all we could do with a series of such commands.


A series of commands tied together....well, that's a program !!

If you could do so much with these mighty single-line commands (and they are mighty as we shall see)....what all can you do with a bunch of them tied together !!

That's what we will explore here.
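
As a small preview (a hypothetical sketch, nothing more), the two commands above, saved into a file, already make a re-usable program:

#!/bin/bash
# collect-pdfs.sh - gather every pdf from the sub-folders into one place
mkdir -p data-pdf            # -p: don't complain if the folder already exists
cp */*.pdf data-pdf
echo "Copied $(ls data-pdf | wc -l) pdf files."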


(to be continued...)

Tuesday, August 16, 2022

Reviving a dormant blog...And a few words on Lefebvre, Cartesian geometry and Open-source GIS

The Lefebvrian Paradox


In his famous and influential book ‘The Production of Space’, Henri Lefebvre had written that ‘social space, and especially urban space, emerged in all its diversity – and with a structure far more reminiscent of flaky mille-feuille pastry than of the homogenous and isotropic space of classical (Euclidian/Cartesian) mathematics.’ 

Ironically, in the decades following the publication of Lefebvre’s book, it is precisely the dramatic developments in the application of ‘Euclidian/Cartesian’ mathematics to geographical analysis – namely through the development of various Geographic Information Systems (GIS) software – that helped unravel the innumerable layers of the urban mille-feuille. However, such technologies were proprietary, expensive and largely out of the reach of community groups. For the power of computerised geographical analysis to address the complexities of social space, it had to be accessible to the community and become the technological arm of 21st century participatory development. That became possible with the emergence of the Linux operating system and the free and open-source software movement as a powerful alternative to proprietary software.

Altruism through technical necessity - the revolutionary potential of the open-source movement

The democratic and community oriented nature of the open-source movement is not just a function of altruism (which definitely motivates many members of the movement) but of technical necessity. It is only through the collaborative efforts of millions of developers around the world that the open-source movement is able to create its software. The fact that the Open Source Geospatial Foundation describes itself as a ‘not-for-profit organization...devoted to an open philosophy and participatory community driven development’ (https://www.osgeo.org/about/) says a lot about the movement’s potential role in the field of participatory urban development.

A quick glance at the remarkably informative maps prepared by the citizens of Kibera slum in Nairobi demonstrates the power of this combination. My personal favourite is a thematic map on the status of safety and security in the slum, which identifies dangerous places through a clustering of “bottles” (signifying alcohol vending spots) and “bulbs” (signifying operational or non-operational street lights). A dense clustering of bottles along with black bulbs (non-operational) suggests dangerous locations. The map is prepared using the opensource QGIS software and uses openstreetmap as its base layer (https://mapkibera.org/download/maps/Security%20map%20final.pdf).

The process effectively combines the depth of qualitative information, which can only be acquired through detailed community mapping exercises, with the accuracy, speed and processing power of modern computing. This is where Cartesian/Euclidian mathematics (what are the “bottles” and “bulbs” if not coordinates on an x-y cartesian plane ?) meets the local knowledge of the community.

The major distinction between proprietary and opensource software is not that one is generally expensive and the other is mostly free. While proprietary software treats users primarily as consumers of technology (causing steady technological dependence, while simultaneously creating an illusion of being tech-savvy), opensource software encourages users to transform into collaborative developers (encouraging community participation and causing technological empowerment).

Pro-Poor Development goes Big Business - Government in hypersonic mode

Although community organisations around the world have effectively used opensource software to map their settlements and exert their agency, the increasing size, scale and speed of implementation of urban poverty alleviation programmes in the global south creates its own challenges.

In this new context it is no longer enough for community organisations to create detailed maps of their own settlements and prepare their own databases. Now, instead of having to deal with a lethargic, inefficient and apathetic bureaucracy, they increasingly have to deal with a highly energetic, quick and effective state machinery, which is flush with funds and operates in partnership with large private sector consultants and technology firms to implement gigantic urban poverty alleviation programs simultaneously across thousands of slum settlements in hundreds of cities. Instead of being a reluctant antagonist, the state has turned into an extremely vigorous ally of the urban poor – in both cases community participation may suffer, but the community mobilisation methods of the past are simply not adequate in this drastically altered reality. Many NGOs and community groups have simply not grasped this altered reality well enough.

The challenge of technological development

Moreover, different slum improvement projects require different types of maps. A Kibera-like self-assessment need not map every single dwelling in the slum, but a land titling project like Jaga Mission of Odisha requires an exact mapping of the dwellings (as land rights are issued based on the area of actual occupation). Discussions on participatory mapping are incomplete if not seen in the context of the overall objective of the task and the technical requirements for accomplishing it.

In Jaga Mission, high resolution images of slums were captured using drones and GIS databases of slums were prepared by private sector technical consultants by digitizing these drone images and linking them to the household survey data. The fact that the mapping of 1725 slums spread across 109 small and medium towns of the state (spread across the 155707 sq.km area of Odisha) and the survey of about 170000 slum households was completed within a period of 7 months gives some idea of the speed, scale and intensity of the exercise. It averages out to the preparation of detailed and accurate GIS databases of about 62 slums every week for 7 months straight. 

Jaga Mission achieved this through a creative combination of high technology activities (such as drone mapping and GIS) with field-based community participation activities. About 24 NGOs were engaged to undertake community mobilisation exercises and create Slum Dwellers Associations (SDAs) in each slum. Both the drone mapping by the technology consultants and the household surveys by the NGOs were done in coordination with the SDAs.

The combination was achieved through standardised operating procedures (SOPs) which, in turn, were based on the parameters of granting land rights as detailed out in “The Odisha Land Rights to Slum Dwellers Act, 2017”. While the slum dwellers did not prepare their own maps, as in the case of Kibera, they played a crucial role in ensuring that the mapping of the slums was done correctly. 


Infantile optimism or realistic pessimism ?

Jaga shows a possible model in which community groups can become partners of large and technology heavy processes and, hopefully, in the future, take over the databases of their slums from the private technology consultants and the government. They can then initiate a Kibera-like process in which the maps of Jaga are continuously enriched with qualitative information with the help of open-source software and technology volunteers.

This may not appear as romantic as the vision of slum dwellers preparing their own maps using open-source software, but given the present reality of large-scale and high-speed projects, such approaches may be the only ones operationally possible to allow a successful engagement of communities with the developmental blitz. Over time they may even overcome the blitz and achieve the romantic vision at a massive scale – not unlike the way Linux overcame Microsoft.