Thursday, September 29, 2022

Will Low-Code and No-Code Development Replace Traditional Coding? – Slashdot

No.
The problem with these low/no code solutions is they bring a severe case of vendor lock-in, in addition to limiting who you can actually hire when people quit. Not only that but they don’t scale particularly well, and the more you automate the more your recurring licensing costs go up.
I’d be curious to see a cost analysis done by somebody that isn’t offering these services to see if it even saves money.
Most office automation software consists of only a handful of different screen types. If you can use this to handle that sort of internal software, it saves you a lot of money.
Don’t use it for the wrong type of problem, that’s expensive. But otherwise it should allow you to focus on the business problem you want to solve, instead of the tech.
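To make the "handful of screen types" point concrete, here is a hypothetical sketch (not any particular vendor's format) of what such internal tooling often boils down to: the screen is a declarative spec, and one generic routine validates every form built from it.

```python
# Hypothetical declarative form definition: the screen is data, not code.
FORM_SPEC = {
    "title": "Expense Report",
    "fields": [
        {"name": "employee", "type": "text", "required": True},
        {"name": "amount",   "type": "decimal", "min": 0},
        {"name": "category", "type": "choice",
         "options": ["travel", "meals", "supplies"]},
    ],
}

def validate(spec, record):
    """Generic validation driven entirely by the spec."""
    errors = []
    for field in spec["fields"]:
        value = record.get(field["name"])
        if field.get("required") and value in (None, ""):
            errors.append(f"{field['name']} is required")
        if field["type"] == "decimal" and value is not None:
            if "min" in field and value < field["min"]:
                errors.append(f"{field['name']} must be >= {field['min']}")
        if field["type"] == "choice" and value is not None:
            if value not in field["options"]:
                errors.append(f"{field['name']} must be one of {field['options']}")
    return errors

bad = validate(FORM_SPEC, {"employee": "", "amount": -5, "category": "fuel"})
print(bad)  # three errors: missing employee, negative amount, bad category
```

Adding a new data-entry screen then means adding another spec, not another program, which is where the cost savings come from for this class of problem.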
Then they will have to hire consultants to fix all the crap code generated by all these 'solutions'. Just like 15 years ago, when RAD software was going to generate all the UIs. Except the UIs were almost worthless because they couldn't even do basic input validation. I've been doing this 30+ years and keep getting threatened with this crap every 10-15 years. Someday, if AI gets good enough, all the developers will be developing AI code, and what we do today will just be generated. What we think of as code now will shift left and developers will develop other things. I think we're a ways off yet. AI needs to get better.
Exactly. Code isn't a problem to be solved unless you're paying coders. Someone still needs to express what needs to be done coherently enough to do something useful, and code's just a tool to get a computer to do that. Take the code away and you're still left with the need for someone to describe the task in enough detail to get something useful out of our new AI overlords, and that someone isn't going to be the ones who think they're paying too much for coders.

No.

No.
A professional coder's hands typed this.
Will AI-generated code replace 100% of all programmers’ coding? Indeed, no.
BUT! It will very definitely complement coding.
Over a very long time (not the immediate future), AI could eventually replace some of the very low-skilled entry jobs.
Just look at what AI has done with translation: you still need translators nowadays, but DeepL and Google Translate can produce a sufficient draft translation. And they also enable Joe Random to quickly translate a few simple sentences without veering too far into nonsense.
If you lose the low-skill jobs, you'll lose the people who would one day build enough experience to become the high-skill people in demand.
Will we ever stop asking such dumb questions in Slashdot headlines?
“Every developer will have their code generated”… Well, that’s nice that AI is so good that it can do assembler, deal with real time race conditions across multiple real time threads, figure out how to analyze protocol problems, and analyze core dumps.
The problem with all generated code I've ever seen is that it's bloated, often too generalized, relies on libraries, and really doesn't understand what you were trying to do. But I've seen people excited about it too, sometimes exuberantly irrational. E.g., on an embedded system with hard real-time constraints, one programmer said, "This tool's great, it only has 100% overhead!"
There have been overhyped fads making the same claim since the 1980s. They're all dead and barely remembered today. They wouldn't be remembered at all if not for the herculean efforts required to repair the damage after a few C-suite types saw an ad in their golfing magazines and drank the Kool-Aid.
Now we have people wanting to use the same fake silver bullets on a technology they want to make into the backbone of our economy in spite of a series of spectacular failures over the last year that should be taken as cautionary tales.

The problems with all generated code I've ever seen, is that it's bloated, often too generalized, relies on libraries, and really doesn't understand what you were trying to do
You can waste a lot of time trying to persuade the code generator to generate the code you want. The time would be better spent just writing code. Chaps in the IT department at work, who ought to know better, put together a stock control system using Python plus some clever framework. Development zipped along, until we tried to implement the detailed requirements for our business. The developers ended up fighting the framework. Some things couldn't be done, or, if you could do them, would be dangerous.
I see a problem sometimes where team members feel quite productive doing work that is pointless. Including making frameworks, rewriting frameworks, doing a lot of busy work on tools that don’t get used, etc, rather than doing work that contributes to the goal. And often they sneak by this way for a long time because they look so busy…
I work in embedded systems. Originally, I was on the firmware side; now I’m on the hardware side. Right about the time I was making that transition, a person who I considered to have a pure software focus talked with me about what I considered to be a very simple algorithm in a recent product, saying, “I want to replace that algorithm with a configurable framework so that engineers can adapt it to new products without having to write or modify code.”
That just blew my mind. This person wanted to replace a very simple algorithm with an entire configurable framework.
Perhaps, but that doesn’t sound good enough for a hard real time system. Which was the given context.
To guess how good it was in other contexts, one would need to test it in those other contexts. Certainly Python is now good enough to be used in lots of places where before only C would work. This is partially due to faster code, but largely due to faster hardware. And lots of optimized C libraries which are better in their special area than the average programmer could do.
Along this same line, many STL algorithms are better optimized than what the average programmer would write by hand.
The fact is what I can do today in a few lines of python is fantastic. People complain how hard it is to code Apple devices, but if you look at the density of what the code does, it is amazing.
The fact is, I know few programmers who are not dependent on low-code solutions for development. The economics of software development do not allow for anything else. We need too many coders and cannot demand the skill we once could. Most are not paid what we once were.
It is like the French walking shoes I used to buy. For years they were hand made and worth every penny. Now that the company has grown, the economics means they are just hand finished. Hand made code is not useful. The human knowledge and skill has been automated.
Those are not “low code” solutions. The repeated forever claim is that low code/no code will allow joe random sales guy to produce custom computer programs using some kind of AI.
You are talking about simply more powerful and expressive tools that can make a skilled programmer faster.
A nail gun in the hands of a skilled roofer can get a 3 day job done in a day. It can also nail an idiot’s hand to the roof in an instant (I have seen that happen).
… will not totally replace coding, but some amount of no-code will eventually be viable via AI, for programs that are tractable via AI-augmented coding for non-technical users.
The timeframe for this will be hundreds of years, at least, to get to non-trivial programs.
Programming in a modern form is roughly 70 years old. 100 years ago, programming basically didn’t exist.
You really think that within another 100 years of progress (across machine learning, CPU speed and capacity, memory amounts, scaleability, etc.) there will be no progress in “no code” or “low code” over where we are today? I find that a remarkably arrogant and myopic statement.
Google "Jacquard loom". Programming of a sort existed in the 18th century.
I’m slightly familiar with the loom (and similar tech is–or was just a few years ago–still used). You could even go so far as to point to the design of the antikythera mechanism as a form of programming as well. I don’t mean to diminish such things, as they are completely awe-inspiring. Mechanical computers are awesome, and I would love to see a renaissance in that technology!
But, modern programming with written code, the combination of instructions and data, etc., is very different.
OK. And that form only dates back about as far as FORTRAN does. I believe the very early 1950s.
Assembly coding is really a different kind of beast. (Or at least it used to be.)
Thinking a bit more, I guess it probably isn't different anymore. I haven't looked at a modern assembler, but at the time I left they had started including things I remember as being like recursive macros, so it may now be the same kind of beast. I think it was toward the end of the 1970s that CDC started thinking of using APL along these lines.
Interesting point! Imagine someone from the era of plugboards looking at a modern optimizing compiler. Then imagine the same level of improvement again.
Hasn’t this same question with slightly different wording been pumped into the semi-technical/functionally-illiterate media every year since at least 1970?
In that year I recall reading in Datamation (premiere magazine in its time) how programming as a job would soon be obsolete except for a magical few who would write the programs that will automatically write all the programs we will ever use.
Some things people just don’t seem to learn.
Pretty much this.
There are going to be a number of people who are going to be able to do what they need by low code or no code, but there are going to be exceptions to the rules which need to be made. Because the presented tools don’t address all needs.
Though not as commonplace as they once were, COBOL developers still exist. C developers still exist, and assembly developers still exist.

However, you still will need regular dev work and coders.
Code suggestion may reduce the number of cities in which coders will be able to find jobs. People fresh out of university with a four-year degree in software engineering might have to work a job unrelated to their degree for a year or two in order to save up for their own relocation to a much higher cost-of-living part of the country (or even to another country) just to be able to use their degree.

People fresh out of university with a four-year degree in software engineering might have to [relocate to work].
There’s this thing called “remote work” that’s all over the news recently. Even on Slashdot.
“Back to office” mandates from management have been all over Slashdot as well.

“Back to office” mandates from management have been all over Slashdot as well.
OK, so if both of those are true, then there are remote jobs. If that's difficult to understand, you may have to relocate to use your CS degree.
Based on the weekly stories on here about software going kerplunk or kerflooey, I thought this was already in place.
I don’t know why anyone gives Gartner or IDC future projections any attention. They do an adequate job of collating current data (e.g. market share), but when they try to apply ‘insight’ to that data they are pretty useless. I don’t know if I’ve ever seen a specific projection of theirs come to pass as they predicted.
Just for fun I googled and saw IDC put out this gem of an ‘insight’:
‘88% of IT decision-makers agree that data has the potential to fundamentally change the way they do business.’
Wow, what an utterly obvious and useless sentence to dress up as insight…
I remember back in the day when management brought out some 4GL package that was going to allow the business analysts to generate code. "You're going to be out of a job soon..." Two weeks later, I was asked to help the analysts. They had an intractable problem that four of them had been working on for the whole two weeks. It turned out to be a nested if-then-else statement that took me less than 10 minutes (including understanding the problem) to implement.

It’s called Excel.
Exactly! This is in the tradition of Excel. It allows some non-job-titled programmers to automate some things. It also generates consulting work for real programmers. Not to mention everyone and their cousin's dog wanting to integrate with wherever that Excel happens to live.
Please mod parent up!
“employees who build business apps for themselves and other users”
I’ve never seen that turn out well. John Doe may know his business but that doesn’t mean he knows how to structure it logically to get what he needs.
And the moment John Doe puts in sufficient work to understand how to properly structure the problem, and how a solution should work, he’s probably learned how to become a programmer.
Agreed! I’ve seen amateurs make Grand Spaghetti with Excel+VBA, MS “Power Apps”, and other RAD-ish tools. Then when they are promoted or leave, they dump maintenance on IT and say, “You guys are pros, so figuring out my [spaghetti] should be easy for geniuses like you!”
I haaaate that! Spaghetti prevention is a large part of what our experience is about. They don’t understand the field, period. There’s an applicable saying from the moving business: “Amateur movers make professional dents”.

“employees who build business apps for themselves and other users”

I’ve never seen that turn out well. John Doe may know his business but that doesn’t mean he knows how to structure it logically to get what he needs.

Yep.
Either the marketer crashes and burns, or they become a programmer of a sort – with crippled and non-standard tools. Neither is a very good outcome.
Your comment reminds me of an anecdote a professor once told our class. One poor soul was whacking away at his program and it just wouldn’t yield the correct output. He goes to the professor to complain all the numbers are wrong. The professor thinks for second and pulls out an enormous listing from a cabinet and thumps it on the desk. He points at it and says, “all the correct numbers are here”.
Now all you must figure out is what is the correct input. We’ll assume the AI can guess correctly at the output.
They’ve been trying “RAD” for 30 odd years. I have not seen any revolutionary idea yet that solves the problems prior RAD attempts couldn’t. The current tries merely recombine old RAD ideas in (allegedly) new ways, but the old RAD flaws are still in there. Customizing the app, and code versioning/management/packaging is one key area they struggle with.
Also, RAD tools that optimize for quick learning tend to contradict features needed for productivity of those already skilled. It’s hard to favor quick learning and skilled speed coding & maintenance at the same time.
That being said, I believe the best promise is “table oriented programming” (TOP). The vast majority of field and UI navigation info can easily be stored in tables (the RDBMS) where we can use relational math and query screens to do most management and adjustment of such attributes.
The trick is to be able to override the generated markup or defaults via code of such a system for local customization as needed. I’ve been experimenting with “incremental rendering” where specific widget rendering is combined into medium-sized units, such as panels, and then panels are combined into pages/windows, in a fractal kind of way. One can intercept any stage in between to customize with code as needed via event handlers. This also applies to database I/O. That way table-ized attributes can do 90% of the grunt work and it would be easy to tweak the 10% with code for customization.
An advantage of TOP code is that event handlers don't have to be forced into a file tree: you can query them to virtually group them any way you please for reading and editing (you can also add your own sorting tags/attributes). The grouping is not pre-forced on you like, say, MVC is. I'm confident TOP is where the future of coding is, but I seem to be the only one curious enough to do the R&D needed for it to become practical.
Code trees just feel obsolete, just like IBM’s hierarchical IMS databases started to feel limiting in the late 60’s, resulting in the invention of relational. Hierarchies just don’t scale for complex interweaving grouping & searching. They force you to select one factor as king over other valid grouping factors. (Note that for pre-compile, a source file tree may still be machine-generated, but with TOP the coder generally manages code snippets via relational queries or query-by-example, not through file trees.)
Something non-TOP-related that would also help with CRUD/biz apps is a stateful GUI markup standard. Reinventing real GUIs via JS/DOM/CSS has been a bloated, buggy, ever-changing mess.
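For what it's worth, here is one possible reading of the TOP idea sketched above, in plain Python (all names hypothetical, with a list of dicts standing in for RDBMS rows): widget attributes live in table rows, a generic renderer does the default grunt work, and registered event handlers intercept specific stages for the hand-coded 10% of exceptions.

```python
# Hypothetical sketch of "table oriented programming": field attributes
# are data rows, rendering is generic, overrides are keyed event handlers.
FIELDS = [  # one row per field; in practice this would be a DB table
    {"screen": "invoice", "field": "total",  "label": "Total",  "format": "%.2f"},
    {"screen": "invoice", "field": "vendor", "label": "Vendor", "format": "%s"},
]

HANDLERS = {}  # (screen, field, stage) -> override function

def override(screen, field, stage):
    """Register a hand-coded exception for one rendering stage."""
    def register(fn):
        HANDLERS[(screen, field, stage)] = fn
        return fn
    return register

def render_field(row, value):
    # Default rendering driven entirely by the table attributes...
    text = f"{row['label']}: {row['format'] % value}"
    # ...unless an event handler intercepts this stage.
    hook = HANDLERS.get((row["screen"], row["field"], "render"))
    return hook(row, text) if hook else text

@override("invoice", "total", "render")
def highlight_total(row, text):
    return f"**{text}**"  # the local customization done in code

print(render_field(FIELDS[0], 1234.5))   # **Total: 1234.50**
print(render_field(FIELDS[1], "Acme"))   # Vendor: Acme
```

The table carries the defaults; only the exceptional field needs actual code.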
Minor corrections and clarifications:
> is one key area they struggle with.
Should be “two key areas…” (customizing & code mgmt.)
> for local customization as needed.
“Local” here generally means specific app areas such as specific screens or fields that are exceptions to the default or app-wide rule/pattern, not necessarily geographical or organization locality.
> One can intercept any stage in between to customize with code as needed via event handlers.
One could in theory intercept both an attribute lookup and the stage that renders it.
For now and the foreseeable future, they are just shipping code other people have already written, with some contextual changes. It's a fancy templating system.
Just like having Off-the-shelf logic chips and other Integrated Circuit chips available for various purposes such as counters, CPUs, RAM storage units, etc does not eliminate the need to design electronics.
This can help speed up some dev processes, but templated code, even with AI assistance in adapting it, still needs to be edited to make it right, correct, and integrated. The person putting the code in still has to be able to understand the code and whether it is correct and accomplishes what is needed properly and efficiently.
There is a huge demand for software – far exceeding the number of competent developers. Frameworks enable the...um...less competent to produce things like websites. They can also allow good developers to crank out a lot of uninteresting stuff quickly.
There are obvious disadvantages. However, we come back to the high demand, which *will* be satisfied one way or another.
But even then, it’s usually very bad.
So a partner had a 'low code' offering, fairly typical but geared toward our specific industry. As a proof point, they wanted one of our clients to have a meeting; to show how awesome the platform was, they would have the customer prepare a scenario for them to build, sprung on them live on a call. I was there to consult, as it was my company's API that would be targeted, which they had never targeted before. The point was that despite having no experience with our API, and getting the requirement on the spot, they'd build a web control for the customer's use case on the spot.
So we did the call, and the customer gave what I considered a softball scenario (create a dropdown of choices from a single API), one that would be just a few minutes of scripting. So I gave the API entry point that was relevant; they drew a couple of boxes, eyeballed the result, created an expression that jq could use to extract the list from the JSON, and they were almost done... Except the requirement was for it to strip the last character of each element in the list, and despite having one of the developers of their solution on the call, it took them over 2 hours to figure out a way to do a simple removal of the last character (and, incidentally, a surprisingly large volume of code, written in the proprietary language that comprised the 'low' part of their low code).
So the customer did not buy the partner's software. They had expected it to be a softball they were willing to entertain, but it was so much harder than just writing a bit of code.
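For comparison, the requirement that stalled the demo for two hours is about one line of ordinary code. The JSON shape below is invented, since the story doesn't specify the API:

```python
import json

# Hypothetical API response; the real payload shape isn't given in the story.
response = json.loads('{"items": ["red.", "green.", "blue."]}')

# Strip the last character of each element in the list.
choices = [item[:-1] for item in response["items"]]
# roughly the jq equivalent: .items | map(.[0:-1])

print(choices)  # ['red', 'green', 'blue']
```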
This is the problem with low/no code environments, it’s very easy to find a case that’s ‘off the rails’ and then it’s all hell. Even if you are all ‘on the rails’, the interface tends to be a bit tedious. You are essentially playing with flow chart instead of code, and that can be surprisingly annoying to do.

This is the problem with low/no code environments, it’s very easy to find a case that’s ‘off the rails’ and then it’s all hell. Even if you are all ‘on the rails’, the interface tends to be a bit tedious. You are essentially playing with flow chart instead of code, and that can be surprisingly annoying to do.
Very much this.
The simplified environments are wonderful when you’re only trying to do what they’re designed to do, out of the box. And while those capabilities are usually sufficient to produce a ton of flashy demos and tutorials, you’re going to exceed them in the first five minutes of trying to implement something real.
The solution is to have an environment that makes it easy to do the 90% for which the pre-canned approach actually is good enough, while still allowing you to drop down and "code for real" when you need to.
I’m not sure how to parse your last sentence, but I’ll assume it’s some expression of oddness at my scenario. In my scenario, there were weird politics at play.
Some people at the potential client wanted my company's solution, but our competitors in the market had the stronger brand recognition, and thus their decision makers were leaning away from entertaining anything from my company. However, this 'low code' company had what was viewed as a well-liked brand, and the other companies had declined to work with them.
Betteridge’s law of headlines applies.
Only if the low/no-code tools can be used to expand their own functionality and create new low/no-code development tools. Otherwise, someone still has to be able to do the "full" code version to make new low/no-code tools.
It depends on who you ask.
If you ask the people whose jobs consist of manually generating code, then the answer is: "No. Of course not. And we will do everything necessary to undermine your efforts."
Others will say maybe or yes. It’s basically a problem of machine learning. You are teaching a machine to translate requirements expressed in (ambiguous) human languages to machine instructions, with very exacting behaviors. It’s more of a problem of man-machine communications and dealing with the imprecision of human languages and knowledge*. Once you’ve gotten past that, all you need is a fancy compiler.
*Specifically, how do you deal with humans that really don’t understand exactly what it is they want? Or even what their job is?
If you can specify precisely and unambiguously what you want your program to do, you have written code.
If you write them down, you’ve written specifications. There is still a rather large gap between specifications and a working solution, which is the gap that “low code” is aiming at.
Writing them down in English (or any other non-technical language) is not generally sufficiently precise and unambiguous.
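A small illustration of that point: once a "plain English" business rule is stated precisely enough to be unambiguous, its transcription into code is nearly mechanical. The rule below is made up for the example:

```python
# English: "Flag an order for review if it is over $1,000, or if it is
# over $250 and the customer signed up less than 30 days ago."
#
# The precise specification and the code are essentially the same artifact.
def needs_review(order_total, customer_age_days):
    return order_total > 1000 or (order_total > 250 and customer_age_days < 30)

print(needs_review(1200, 400))  # True  (over $1,000)
print(needs_review(300, 10))    # True  (over $250, new customer)
print(needs_review(300, 90))    # False
```

The hard part was never the syntax; it was pinning down the thresholds and the "or"/"and" structure, which is exactly what writing the spec forces you to do.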
It may replace some no-clue wannabe coders though.
Code cannot be generated unless detailed requirements are specified, and specifying detailed requirements constitutes coding, although the language may over time be different / higher level than Java or C++.
https://imgur.com/4DyqABX [imgur.com]
There is nothing wrong with Labview.
I have implemented a number of projects, some of them quite complex, in Labview. The encapsulation of NI gear is great as well.
Not that I would use it for everything. I prefer C and C++, actually, or python.
It is evolving; with good use of typedefs and auto and other modernized loop support, the suckiness of the syntax is greatly reduced. I already see people with no idea of balanced binary trees or hash-function collisions casually using std::unordered_multiset like gangbusters.
And the new crop of graduates the universities are turning out build very complex data structures using Python, Jupyter, and Pandas.
RAD solutions don’t scale. If you want something fast and low code it isn’t going to be optimized for any particular usage pattern, and, not knowing the use cases for potential growth of the application, the RAD tool won’t be able to decouple the logic and data layers appropriately to allow for scalability later.
I've personally seen this happen dozens of times over 20-30 years. You have an Access backend originally dealing with a few hundred part numbers, then a process changes and it has to deal with tens of thousands.
Every eight years, something like this comes along. Sometimes it’s a new code-free way to code, such as drag-and-dropping widgets. Sometimes it’s a new methodology, such as Agile (i.e. no architecture) or Pair Programming or “Open Concept Offices.”
And sometimes they do improve things. Never by quite as much as the salesweasels and consultants claim. But Agile does help expose a project going off the rails sooner... as long as you still waterfall (or "spike") the initial design and architecture. And Pair Programming has its uses.
It ain’t going to happen this time either.
At some point, you need something that a real programming language provides. By the time you add all that stuff, you're traditionally programming.
This is already here in a lot of ways, with voice assistants, configurable things like IFTTT, etc. Sure, those do simple isolated things and not big complex applications. But they can easily do a task with no coding that in the recent past would have required writing a couple dozen lines of code. And it seems pretty clear to me that these things are building rapidly towards replacing things that would have been full-blown applications until recently.
As a programmer-who-got-promoted-into-management, I really
I have been playing with GitHub copilot and have also been reading about others testing it. One of the things I have noticed is that it helps experts a lot more than it helps novices. These tools are good but they need to be given a direction. When writing lots of small functions which are very narrow in scope these tools do an amazing job.
However, what I have noticed is that when novices use these tools they are not specific enough and tend to write master functions, which these tools do a horrible job of generating.
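As an illustration of "narrow in scope": a function like the following, whose name, signature, and docstring pin down exactly one behavior, is the kind of thing the comment says these assistants handle well (the example itself is mine, not Copilot output):

```python
def median(values):
    """Return the median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2:          # odd count: middle element
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2  # even count: average the two

print(median([3, 1, 2]))     # 2
print(median([4, 1, 2, 3]))  # 2.5
```

A vague "master function" like `process_all_the_data(...)`, by contrast, leaves the tool guessing at dozens of unstated decisions.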
No, low-code/no-code won’t replace full-on development for one simple reason: behind all that low-code/no-code front-end is… a whole lot of code. LC/NC makes it easy to arrange and connect components that already exist. It makes UI design and implementation easy. But it can’t create new components that don’t already exist. None of that new stuff is trivial either, you wouldn’t believe the amount of logic needed to prepare printed menu boards for a coffee shop chain.
I’ve used a bunch of “low code” environments.
They’re wrongly named. What you essentially have is a bunch of ready-to-use code that you just plop in via some kind of UI instead of through a function call. For the core business functionality, you still write code.
To me, there’s no line between them. Depending on my use case, I’ll use this or that. Even with most programming languages these days, you’ll use a framework for whatever you’re doing. low-code environments are essentially frameworks with a GUI.
No-co
As has always been the case, low-code / no-code tools can do useful work in specific niches. For example, many CRM tools have low-code solutions that make it easy to build customer intake forms or other data entry screens related to sales. Many web hosting companies offer tools that make it easy for mom-and-pop restaurants or other shops to develop a basic web site. Shopify and PayPal offer tools to quickly build online stores and payment systems.
As long as you stay within the core feature set of the tool, you're fine.

Will Low-Code and No-Code Development Replace Traditional Coding?
Will masturbation replace sex?
[ Oops! Forgot this is /. … “Sex” is … ] 🙂
No
In the 1970s and earlier, a business application would have needed some COBOL programmers to put it together.
By the 1980’s a significant number of daily tasks could be automated by a layperson using spreadsheet software and the right set of macros and equations.
Have prostitutes replaced wives?
Do you want it quick and dirty or do you want it made with love?
Back in the day, things like COBOL and SQL were overhyped as tools that non-technical managers could use to get some or most of the development work done without involving programmers.
COBOL had English-like syntax, and therefore was assumed to be usable by non-developers.
SQL was supposed to let managers get the reports they wanted without having to task someone in the Information Systems department (yes, that was one of the monikers that morphed into IT) to write them a report.
None of the above materialized.

COBOL had English like syntax
Has: there is still a lot of legacy code being developed and maintained in COBOL.

SQL was supposed…
Partially true. [learnsql.com] I remember the hype; in the 80s, a lot of DBMS systems had their own query language, it wasn't easy to integrate data between systems, and standard reporting software would only work with some vendors' databases.

When Ray and I were designing Sequel in 1974, we thought that the predominant use of the language would be for ad-hoc queries by planners and other professionals whose domain of expertise was not primarily data-base management. We wanted the language to be simple enough that ordinary people could ‘walk up and use it’ with a minimum of training. Over the years, I have been surprised to see that SQL is more frequently used by trained database specialists to implement repetitive transactions such as bank deposits, credit card purchases, and online auctions. I am pleased to see the language used in a variety of environments, even though it has not proved to be as accessible to untrained users as Ray and I originally hoped.

This "lo/no coding" has been happening since we stopped using toggle switches to bootstrap the mainframe, or maybe even earlier. The "coding" abstraction gets higher as software gets more complicated, but the creative content remains. Even then, somebody still needs to write the bootstrap code.
If you believe this, I have a CASE [wikipedia.org] tool to sell you. It's like fusion: always 30 years away.
For simple, common configurations, low-to-no code can work. However, it is seldom computationally complete, and when it is, it is inherently too cumbersome to be practical. Decades have passed with various attempts at low-to-no code never having taken off.
Furthermore, making coding easier is not what stands between the broader masses and complex logic engineering. Rather, it's their fundamental ability to conceive of systems. From experience, I've noticed that it comes down to who is better at tinkering and deciphering how things work.
Also, hahahahahahaha…..heehee.
Also, clueless managers will spend billions on these boondoggles trying to save money.
Regardless of the tools used, creating software is hard. These tools might allow faster creation of trivial code, but complex systems are difficult, whether they're made of code or anything else, like logistics or factory layouts.
Not much else to say really.