Privatisation of knowledge

This article first appeared on DNA here.

A LOT has been said about how Malaysian entrepreneurs have weak patent portfolios. Some of the complaints are about the slow approval process versus the speed at which business moves, and also the high cost of registration – particularly if one wants to enjoy international protection.

However, I would like to share my personal views on why I think that patents are a bad thing for entrepreneurship as a whole.

But first, let’s clarify what a patent is. Wikipedia (always useful) says that, “a patent is a set of exclusive rights granted by a sovereign state to an inventor or assignee for a limited period of time in exchange for detailed public disclosure of an invention.”

Simply put, a patent is a government-granted monopoly over an invention.

Now, patents were crucial to the success of the industrial revolution, as they were the legal mechanism for rewarding people who successfully pushed the frontiers of technology, granting them a monopoly to commercially exploit their inventions.

In principle, there is nothing particularly wrong with this, as we’re rewarding the brave explorers who push forward human development.

However, the number of patents filed during the industrial age was very small compared with the thousands that are filed by individual companies today. Mimos alone files more than a thousand patents each year, while large technology companies like IBM are filing a lot more than that annually.

This presents a problem.

Let’s use an analogy to put things into perspective.

Imagine that the entire body of human knowledge is like a huge land. Let’s call this land Pengetahuan [Malay for ‘knowledge’].

If you believe that human knowledge is unlimited, then just imagine that there are land reclamation or exploration works going on at the borders of Pengetahuan. When a brave explorer opens up a jungle or discovers some new land, the King of Pengetahuan grants him a leasehold on that plot, which he then fences up, staking his private claim on it.

There were no significant issues when there was unexplored land aplenty and few fenced-up private plots. However, with the number of patents filed today, you can imagine that travelling around the land of Pengetahuan would be terribly difficult, or even impossible, without running into a fenced-up private property. Entering without permission would amount to trespass, and we know what happens to trespassers.

In my opinion, this is the problem with patents – they privatise human knowledge. It is nearly impossible to do anything today, whether in academic research or commercial development, without bumping against someone’s patent.

While the hurdles aren’t insurmountable, at the very least, it is a bump in the road that either forces us to climb over the fence or walk around it.

If things continue to be left unchecked, there would hardly be any land in Pengetahuan that we can step on without paying rent, or walk through without paying a toll.

If this were merely land, then we could always choose to stay at home and not go on an adventure. However, patents are a fence on human knowledge. So, the analogy starts to fall apart here.

Imagine that an intrepid explorer – Kembara – manages to high jump over a fence and discovers a new land adjacent to the existing one. He can get a grant from the King on that new piece too. Then, he becomes the new lord in his own land.

Unfortunately, to get to his land, he would need to jump over the neighbour’s fence.

With real land, we can ask the government for a ‘right of way’ – to carve a small path into the new land through the neighbour’s – but this is not the case with patents. The only way that Kembara can get to his own land is by paying his neighbour a toll or rent. Otherwise, his neighbour can sue him, or shoot him.

This is where patents start to stifle inventions – particularly those from young startups. Silicon Valley would not be where it is today if the pioneers were surrounded by patent land-mines. Imagine that Kembara got shot at the moment he stepped out of his home. That would have put a severe damper on his adventures.

The present patent system also encourages some weird behaviour. There are even companies today – Patent Trolls – whose business model is built around patently suing other people for patent violations.

They don’t even need to do any research or development but simply buy up existing patents to bolster their patent portfolios.

What is sometimes missed is that a lot of the technology giants have huge patent banks, largely for defensive purposes. Some have suggested that Google’s purchase of Motorola Mobility for US$12.5 billion and subsequent sale at US$2.9 billion, was for its portfolio of mobile-related patents that it needed to defend Android.

The idea behind a patent bank is that if a company gets sued for patent violations by another, they can countersue for other patent violations because, simply put, it is a near certainty that there are some obscure patents in their portfolio that the other company has violated in return.

This business strategy is MAD – mutually assured destruction.

This is often the reason why startups are encouraged to file for patents. However, in order to make effective use of this defensive strategy, a company would need to have a rich stockpile of patents, and patents cost a lot of money to file and maintain.

The only ones to profit immediately from all of this are the agents of the patent system.

So, instead of encouraging Malaysian startups to buy into an ecosystem that will bleed them dry before they can even spread their wings to fly, I think that we can encourage them to embrace a culture of openness instead.

Knowledge does not come through divine revelation. Ideas cannot bloom in the dark. Innovation thrives when there is openness, sharing and cross-pollination of ideas.

Human progress has always been built on the shoulders of giants. Now, imagine where we would be today if said giant decided to swat us off its shoulder.

Bazaar model vs copyright registration

This article first appeared on DNA here.
Disclaimer: I am not a lawyer and this is not a legal opinion. Please seek professional advice.

I WAS driving home one day when my mind began to wander, for some reason, onto the intersection between copyright laws and Open Source Software (OSS), and I began to mull over the consequences of recent amendments to our Copyright Act.

The Malaysian Copyright Act was amended in 2012 with many new sections added, but my focus is on Sections 26A – 26C. These sections create a Register of Copyright and spell out the procedures for applying for and amending registrations. Their combined effect is to enable the voluntary registration of copyright in Malaysia.

The entire OSS ecosystem relies on strong copyright laws to thrive. Without strong copyright laws, we will not be able to propagate OSS. So, these amendments simplify certain things as there will now be a central directory of all copyrighted works in Malaysia.

In principle, copyright vests automatically with the author the moment that the work is created, and this is clearly spelled out in Section 26 of the Act. Unlike trademarks or patents, there is no need for someone to register a copyright for it to exist.

What the new Section 26B(5) says is that all registered copyrights are secured and admissible as evidence in court. This means that there is now a rebuttable presumption that the person listed on the Register is the copyright owner.

From here on, this article is pure speculation as this is uncharted territory in Malaysian law. However, some insight can be obtained from the United States, where similar laws already exist. Over there, you cannot sue in court if your copyright is not registered, even if you do own the copyright. This may prove to be a problem.

As an illustration, let’s assume Mr Rekacipta wrote some software but did not register his work. Then Mr Cetakrompak copied his code and registered it. After a while, Mr Rekacipta found out that Mr Cetakrompak was selling his software. He was unable to sue because he did not register his work. Instead, Mr Cetakrompak sued Mr Rekacipta.

Since Mr Cetakrompak has a registered copyright, he is presumed to be the ‘true’ owner. So, Mr Rekacipta has the burden to prove that he is the ‘true’ owner. If he fails to prove it, then the law presumes that the owner is actually Mr Cetakrompak.

So, all that we need to do is to register our work and all will be well, right?

Then we need to understand how OSS is developed. There are two main models of OSS development, often called the Cathedral and the Bazaar models.

The Cathedral model is similar to a lot of proprietary software development, where the development work happens behind closed doors. It is centrally driven. The software is only opened up at the end of the development cycle when released to the public.

In this model, it is certainly possible to register the copyright as the code is under tight control from beginning to end. So, it is difficult for someone to copy the code and register the software ahead of the original owner; and the software only needs to be registered when it is finally released.

However, the Bazaar model is the exact opposite, where the development work happens in the open. It is usually community driven. An example of this is the Linux operating system, which is worked on by thousands of developers spread across the entire globe with new code ‘released’ practically every day.

Projects using this development model release new code continuously, whether due to bug fixes or adding new features. These projects are most at risk of being copied and registered by Mr Cetakrompak.

With this model, since copyright merely protects a specific expression of an idea, continuous releases of new code may constitute a new work and would potentially need to be registered, particularly if they added features or changed things significantly. For some actively developed projects, this could mean a new registration every day.

Unfortunately, under Section 26A(2) of the Malaysian Copyright Act, no voluntary registration will be entertained unless a prescribed fee is paid. While the fee may be a nominal one, multiply it across the daily-release lifetime of a project, and it is no longer merely ‘nominal.’
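To see how quickly a ‘nominal’ fee adds up, here is a rough back-of-envelope sketch. The RM100 fee is purely a hypothetical figure for illustration – the actual prescribed fee may well differ:

```python
# Back-of-envelope: cumulative registration cost for a daily-release project.
# The RM100 fee per registration is a hypothetical figure, not the actual prescribed fee.
fee_per_registration = 100      # RM, assumed for illustration
releases_per_year = 365         # one new registrable 'work' per day
years = 5                       # lifetime of an actively developed project

total = fee_per_registration * releases_per_year * years
print(f"Total fees over {years} years: RM{total:,}")  # RM182,500
```

Even at a modest assumed fee, a community project releasing daily would face six-figure registration costs over its lifetime.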

Unlike patent registration, there is no requirement under the Copyright Act for the registering body to conduct a search of prior art. All that is needed to register a copyright is to submit the proper forms with due payment, accompanied by a statutory declaration.

For community-developed projects, most of the developers are not paid and are contributing code on their own time and dime. Asking them to contribute additional money to register their copyrights regularly would be a little unreasonable.

Therefore, this new voluntary registration ‘requirement’ has the potential to affect the Bazaar model of community driven software development in Malaysia, to our detriment.

System-on-Chip, and what it promises

First published on DNA here.

AT a recent dinner with several friends, I was asked what Snapdragon was. I was tempted to answer: “Trademark,” but decided against making that wisecrack. My friends had a vague idea that it had to do with the microprocessor in their phones, but was that not an ARM processor?

I think that some people out there might also be a little confused about it. Seeing that microprocessors are my little hobby, I tried to answer their question.

The technical term for the Qualcomm Snapdragon is a system-on-chip or system-on-a-chip (SoC). It is somewhat similar to products like the Apple A6, Samsung Exynos or TI OMAP, which are also found in modern smartphones. There are a lot of SoC devices out there.

An SoC is essentially a single electronic device that integrates functions traditionally spread across separate chips, which took up a lot of space on a circuit board. With the magic of modern technology, we can now integrate a lot of different functionality and powerful capabilities into a single silicon device.

Hence the term system-on-chip.

Integrating multiple chips into one saves on-board space that is already very limited in our increasingly smaller phones. This means that we can all look forward to having physically smaller devices as more and more functionality gets integrated into an SoC.

Furthermore, integration improves communication between the different devices by keeping all circuit paths on-chip instead of on-board. Although electrical signals move very quickly, it still takes a finite amount of time for them to travel from one place to another. So, shorter circuit paths will improve speeds and give us a better user experience.

In addition, integration also reduces power consumption by having less circuitry overall. By squeezing everything onto a single chip and allowing all the different parts to communicate directly with each other, it reduces the amount of power needed, which gives us better battery life.

Let us not forget that integration also means economies of scale and brings down the cost of manufacturing. Instead of manufacturing multiple devices using different technologies and processes, a fully integrated device could be manufactured at one time, using a single manufacturing process.

Therefore, SoCs are the way to smaller, faster and cheaper gadgets.

A typical SoC would contain one or more microprocessors with a number of other peripherals or devices all tightly integrated within it. These microprocessors do not even need to be of the same type or architecture, but are commonly so for convenience. Some makers couple big microprocessor cores with little cores in order to improve power efficiency.

It is not uncommon these days to find a modern smartphone with quad-core microprocessors and graphics processors integrated into the same SoC. This is where companies like ARM come in – to supply the microprocessors that power most mobile computing devices today.

A microprocessor is essentially the brains of all modern computing devices. But for a brain, it is actually pretty dumb. It is essentially an over-powered calculator, merely capable of performing billions of computations each second and moving bits of data around.

But through the modern sorcery of software, all this calculating and moving of bits actually gives us our videos, music, games, the Internet and all other manner of modern entertainment, information and communication capabilities.

So, what about Intel?

Intel is definitely a major supplier of microprocessors, especially for the desktop, laptop and server markets. However, Intel microprocessors have traditionally been sold as standalone microprocessors and not fully integrated SoCs.

Only recently has Intel released a line of Atom-based SoCs for use in smartphones and tablets, which have found use with several manufacturers. It is a late entrant into the mobile phone market, which is the traditional stronghold of ARM microprocessors. This is why most phones run on an ARM microprocessor.

So, how does all this change our lives?

Since nearly all mobile phone SoCs use ARM microprocessors, their computational capabilities are fairly similar, all else being equal. However, the differences in the types of devices integrated into the SoC by different makers mean that different products will have different features and capabilities.

However, unlike the traditional PC market, all this tight integration means that the consumer loses the option to mix and match capabilities. We cannot pick and choose different microprocessor, graphics, radio and other capabilities. Consumers are left with only the set of capabilities that the device makers have decided to put in.

One does sometimes wonder if the world would be a different place if the smartphone market developed like the PC market – where one could assemble custom hardware and load it with any software they liked instead. Wouldn’t it be great if someone out there provided the tools for consumers to easily assemble their own SoC?

It seems that in our pursuit of having smaller, faster and cheaper devices, we may have sacrificed some personal freedom. If that does not matter, then it’s a non-issue.

But if it matters, supporting product ideas like Project Ara, Phonebloks and Neo900 would be a good start as it would tell device makers that consumers want more freedom. And to take things further, supporting open-source SoC efforts like ORPSoC, Milkymist and T3RAS would be even better.

Disclosure: Milkymist was originally powered by my AEMB2 microprocessor and T3RAS is the upcoming SoC from my company.

Bad Computer Science Lecturers

This article first appeared in DNA here.

IN my previous column, I asserted that our local Computer Science (CS) programmes were efficient generators of garbage. Lecturers are a critical part of the process and are also one of the nine areas of programme quality monitored by the accreditation process.

Academic staff quality is assessed through various measures such as staff-to-student ratio, staff development programmes, staff balance, merit recognition, equitable workload distribution, appraisals and awards, etc. These are all human resource matters that leave much to be desired and do not address real staff quality.

Technically speaking, the role of a lecturer in tertiary education is actually quite limited. University education is about independent learning and the job of a lecturer is not to teach but to facilitate learning and more importantly, to inspire students.

Here, things fall short again.

Many lecturers have little to no industrial experience, often joining the profession right out of school and earning their graduate degrees along the way. There is nothing inherently wrong with this. But these lecturers rarely inspire as they have neither war stories to share nor battle scars to show.

More importantly, they fail to bridge abstract theory with real-world practice. Someone without any experience working in a software team on a product would have trouble instilling good development practices as they would have difficulty communicating the complexities involved.

There is a saying that goes: “those who can, do; those who can’t, teach.”

There are some who believe that modern Computer Science graduates should focus on and work at a higher level of abstraction that is closer to the problem. However, I feel that fundamental knowledge of the inner workings of technologies will ultimately result in a better grasp of the higher-level problems.

Unfortunately, lecturers with weak fundamentals cannot help unravel the deep complexities of modern technology. They lack the capacity to facilitate learning and are unable to impart knowledge. These lecturers often read directly from books or slides as they have little knowledge of their own.

This becomes a vicious cycle, with each subsequent generation of graduates knowing less than their predecessors while having to handle increasingly complex problems. This is ultimately not sustainable. To cope, graduates must think fast on their feet and learn on the job, both of which require strong thinking skills.

However, many do little to promote higher-order thinking. Some pay lip service to Bloom’s taxonomy by setting exam questions in a certain way, using certain terms, to get students to think differently. But this actually discourages thinking, as students simply evolve better pattern-matching skills.

Students are rarely encouraged to challenge the lecturer during lectures, tutorials are reduced to tuition classes for tackling exam questions, and lab sessions turn into a set of procedures to be rigidly followed. University becomes a mere extension of high school.

I am not even going to go into the culture of giving ‘tips’. Examination tips seem to have become de rigueur in lecturer-student relations at universities. Some lecturers are guilty of giving tips, and some students are equally guilty of being dependent on them.

That said, modern-day lecturers are caught between a rock and a hard place. There is a lot of pressure being placed on them as they are sandwiched between the students and the university management. In order to satisfy both sides, academic quality is often sacrificed.

On one hand, university management needs to maintain student numbers and increase profits. While this is primarily an issue in private universities, public universities are also being encouraged to increase revenue generation, to become self-sufficient and to reduce their dependency on public funds.

On the other hand, students want to get by with as little effort as possible, preferably without having to go the extra mile to learn. While it is only human to choose the path of least resistance, years of rote learning in schools and instant gratification in their lives have made things worse.

Woe upon the lecturer who challenges and fails an entire class. Very few students would want to attend a university that is infamous for failing students. Lecturers are asked to justify high failure rates and stand to lose their jobs if their programmes are cut due to insufficient student numbers.

Hence, the safest way to satisfy everyone would be to allow students to coast through the course with the barest minimum of ‘standards’. Ensuring a high pass rate would keep both students and university management happy, and lecturers get a sense of job security.

I’d like to suggest that we reduce pure academics at universities and recruit more people with real-world jobs who have a passion to teach. These people can bridge theory with practice and are less concerned with job security and tenure. I would even go further to suggest the recruitment of candidates outside of traditional Computer Science backgrounds to further enrich the programme.

I know that this is a problem within the Malaysian framework but it is something worth looking into. For starters, it might be useful to get these people in as tutors first – to run tutorials and lab sessions in their individual ways.

It would be a win for everyone involved – tutor, lecturer, student, university and the nation.

Bad Computer Science Programmes

This article first appeared in DNA.

I sent drafts of my previous article to the Deans of several local Computer Science (CS) faculties for comment. One expressed reservations about labelling them bad CS programmes, as they are all accredited by the Malaysian Qualifications Agency (MQA) and, therefore, none of them are bad per se.

I would have to humbly disagree.

But first, I browsed through several documents – the Malaysian Qualifications Framework (MQF), the Code of Practice for Institutional Audit (COPIA), the Code of Practice for Programme Accreditation (COPPA) and the Programme Standards for Computing (PSC) – to gain an insight into how programmes are accredited.

Briefly, the MQF provides the structural classification for all academic programmes in Malaysia such as the levels of qualifications from Certificate to Doctoral levels, with their associated learning outcomes, credit system and other criteria.

When the Code of Practice documents are read together, they constitute a check-list of items that must be furnished and questions that must be answered when preparing accreditation documents. They include standard forms that need to be submitted with guidelines for completing them.

The PSC specifies details for each course to fulfill these requirements. Among other things, it also suggests credit hours allocated to various compulsory, core and elective modules; and also lists down the core knowledge areas to be covered by the syllabus.

Taken together, these documents form a complete template for running quality academic programmes, except for one minor detail.

In my opinion, the trouble with accreditation is that the process of tertiary education is largely treated like a production process for manufacturing graduates. As a result, graduate quality becomes tied to process quality. A good programme is then one that has good processes in place.

Of the nine areas of programme quality, only one is concerned with curriculum – and even that is mainly concerned with the processes by which the curriculum is designed, monitored and modified. The other area, student assessment, suffers from a similar deficiency, being largely process-focused.

The accreditation process itself is time and resource limited, often conducted by senior members of academia who are likely busy with many other commitments. With so many quality areas to cover, it comes as no surprise that curriculum content does not necessarily get the coverage that it deserves.

The curriculum itself often has minimal industry input, often only via a small advisory panel meeting annually. The PSC document itself was largely authored by academics. We cannot expect lecturers, who often have limited to no industry experience, to comprehend the needs of the industry.

Unfortunately, some schools make the opposite mistake of tailoring their curriculum to fit specific industry needs, producing job-ready candidates who often focus on tools rather than fundamentals. These graduates make perfect hires for one job, and one job only.

Furthermore, accreditation cycles occur every few years. In an industry where the life-cycle of a technology could be mere months, we cannot expect traditional academic programmes to keep up. By the time a new syllabus clears the necessary hurdles, it’s already out of date.

Curriculum is the core input to the whole process – garbage-in, garbage-out.

As a result, we have universities running high quality programmes with efficient processes that churn out garbage. While I may have singled out MQA in this column, it’s by no means the only culprit in the system. If it is blame that we want to assign, there is more than enough of it to go around.

One may argue that having an accreditation process in place, flawed as it may be, is far better than a free-for-all where any college can offer bogus degrees. The flip-side to this is that a flawed accreditation process risks giving a false sense of security to students who sign up for the programme.

However, my concern is with the curriculum, not the accreditation process.

In the spirit of peer-to-peer, I would suggest that we flip the whole curriculum equation: from one where academics decide on content based on stakeholder input – often leaving out the largest group of stakeholders in the process – to one where the students get to decide what they want to learn.

Instead of a model where a fixed curriculum is shoved down our collective throats, let us turn our universities into places where students can directly take part in the continuous evolution of the syllabus and have the freedom to tailor their learning based on individual needs.

This shifts the burden to the undergraduates themselves in the hope that instead of being mere passive receivers of knowledge, they would take responsibility for their own education, becoming active learners, and be forced to think about and chart their own course.

A silly idea, perhaps. However, it is the future of education today.

At the risk of incurring the wrath of some close friends and former colleagues, I will talk about the people who teach undergraduate programmes in my next column.

One Too Many Computer Science Programmes?

This article first appeared on Digital News Asia here.

I started writing this article, wondering if there were one too many Computer Science (CS) degree programmes in Malaysia. My gut feel was that there were definitely too many CS programmes in Malaysia but I needed to get my facts straight first.

Looking up the institutions listed in the latest MQA Rating System for Higher Education Institutions in Malaysia for 2011 (SETARA’11) ratings, I found that the vast majority had actual CS degree programmes advertised on their websites, while others had CS-related ones (e.g. Computing, Computer Engineering, etc), with the exception of a few specialist universities such as medical and teaching ones.

So, it is safe to say that almost every university and university college in Malaysia has one. This does not surprise me as CS is a relatively cheaper course to run, with less capital expenditure needed for physical infrastructure, unlike some other engineering or science programmes that need expensive lab facilities.

While there is certainly no lack of choice for anyone interested in earning a CS/CS-related degree in Malaysia, I asked myself whether these programmes were meeting market demand. If there were too many CS programmes, one would imagine that there would be a large number of unemployed CS graduates as supply exceeds demand.

Based on the results of the Graduates Tracer Study for 2011 released by the Ministry of Higher Education (MOHE), ICT graduates were no more nor less employable than their peers from other fields. Out of every four ICT graduates, one remained unemployed after graduation, while two others were fully employed within a three-to-four-month period.

According to the ICT Job Market Outlook report released by the National ICT Association of Malaysia (PIKOM) in 2012, the future for ICT graduates seems exceedingly bright. Average salaries for ICT professionals have been steadily rising and, overall, only the oil & gas sector pays more.

One would think that people should be clamouring for places due to the above average pay and the growing job market. Therefore, it did not make any sense that there were still so many who remained unemployed after graduation. Also, according to the same report, ICT job numbers grew at an average of about 27,000 a year over the 2005-2011 period. Something must be amiss.

However, the report also claimed that the number of graduates is declining, with just under 75,000 enrolments in 2011. With about 27,000 jobs being created each year, there are only enough vacancies to employ less than half of those graduating. Therefore, it is natural that most of those graduating will not be able to find employment in the industry.
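A quick sanity check on those figures, treating the 2011 enrolment number as a rough proxy for the annual graduate pipeline (a simplifying assumption, since enrolments span multiple cohorts):

```python
# Rough sanity check on the supply-demand gap described in the report.
# Treats 2011 enrolments as a crude proxy for annual graduate output.
new_ict_jobs_per_year = 27_000  # average jobs created per year, 2005-2011
graduates_per_year = 75_000     # 'just under 75,000 enrolments in 2011'

absorption_rate = new_ict_jobs_per_year / graduates_per_year
print(f"Share of graduates the industry can absorb: {absorption_rate:.0%}")  # 36%
```

On these crude numbers, the industry can absorb only about a third of each cohort – comfortably ‘less than half’.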

Furthermore, the report suggested that not all these graduates end up working in the computing line as there are surely those who end up joining other industries. However, the fact that a full quarter of ICT graduates had difficulty securing employment was a rather alarming figure to me.

While there may not be enough jobs locally, our graduates could surely gain employment elsewhere. In fact, some of these graduates do eventually leave our country for better opportunities, and the ones who leave also tend to be more fluent in English, as highlighted by the report.

The said report also highlighted that the quality of graduates is declining, and the MSC Malaysia Talent Supply-Demand Study 2010 – 2013 said that less than 30% of employers believe that their fresh hires are of good quality. While our job market may be growing, our graduates are less capable of meeting the requirements of the job. As a result, they are fast becoming unemployable.

From both the studies and the report, I have to say that we have an acute problem. While it is not difficult to hire people, it is very difficult to hire good people, which is corroborated by the situation on the ground. What the entire industry sorely needs is brains, but what we get are mostly bodies. For some reason, our universities are not graduating the right kind of people.

I have to also point out that it is not the duty of a university to produce job-ready products for the market but to nurture critical thinkers and creative doers. Unfortunately, the same report mentions that our graduates are lacking such traits amongst other things, and recommends that our government review the entire education system.

Therefore, the real problem is that we have one too many bad CS programmes in our country.

Naturally, after confirming that a problem exists, the next question that I’d ask myself is this: whose fault is it anyway? In my next column, I intend to look at where the faults lie – systemic flaws, teachers’ failures, student apathy, and the role of parents and industry in all of this.