The Dumbing Down Of Programming

Dumbing down anything lets more people participate, and that's only bad if compromises have been made to limit what experts can do.

Isn't this the heart of the CLI/GUI (that's CommandLineInterface vs. GraphicalUserInterface) argument? Anyone who can't program will hate a CLI if it involves repetitive input that could be replaced by a mouse click. A good programmer will always reduce keystrokes through substitutions, which isn't an option for a non-programmer, or in any GUI that I'm familiar with - I happen to prefer WindowMaker on Linux.

Programmers still seem to prefer GUI over CLI. Name one programmer who still boots to the CLI and doesn't use X Windows or MS Windows... Even if they boot to the CLI, they still use GUIs like emacs or vi.

CLI lets an expert express their wishes in great detail, and even lets the expert generate a GUI for a stable set of wishes which will be repetitively desired.

-- DavidSmead?

If the actual GUI configuration is declarative in nature (and open), then it matters less which tool is used to create it. One can ideally have a visual representation, an XML representation, and a relational representation. How you view and edit it is up to you. Unfortunately, a decent standard does not exist yet.

Programming is hard. It needs to be dumbed down so we can build bigger systems.

Making higher-level languages is the best way found so far. See NthGenerationLanguage.

Higher-level languages aren't dumbed down; they improve our ability to reason logically about our programs. Dumbing things down would be using machine language or assembly code, because it is much harder to reason logically about machine code or assembly. I've always considered programmers who still use assembly code to be a bit dumb (there are cases where assembly or machine code is needed, but using it just because you can... is dumb).

I find this statement ignorant and arrogant. Doing assembly well takes experience and practice. It's often required in domains where machine resources are so scarce that one has to squeeze the most out of the least. I don't think any compiler can compete with the best hand-tuners. It's an admirable skill. (Like ski jumping, it's not for me, but still admirable.) -t

As stated, there are cases where assembly or machine code is needed. Writing a web application in assembly code just because you can would be dumb. The TurboPascal source code was written in assembly and closely guarded by Anders H.; almost no one could work on the source except him, because it was such a selfishly guarded magical secret of his. Later, he smartened up and Borland rewrote the compiler in a higher-level language, using assembly here and there when justified. A good practice when doing assembly programming is to keep a copy of the higher-level language source code in addition to the assembly source - then you can reason about the higher-level source, but optimize the bare assembly.

As stated, it's poorly worded, and appears to attack assembly writers in general. I won't assume malice for now, just hurried writing. Fortunately, Ward's Great Wiki allows us to tune our text and improve it.


Connecting thoughts:

What comes above, about programming languages, and what comes below, about graphical vs. command-line interfaces, both stem from an assumption that has so far proven true, but need not be true. First, some definitions that this page could very much use:

1. Lower-level interfaces (in the form of programming language -or- operating system) typically provide tools for performing very simple, but very specific, tasks. In order to accomplish anything substantial, you need to utilize many of them, and thus you need to understand many tools in great detail.

2. Higher-level interfaces typically provide tools that are more powerful, in that they accomplish a wider range of actions. Usually they're automated, so that, given simple (often more intuitive) input, they perform many complex, low-level functions. However, their automation is also their weakness: it's often difficult or impossible to customize the tools to do something different. So at times, it can feel like you're trying to club a fly with a huge rock, when a simple flyswatter would prove so much more effective.

Anyway, in both cases the argument usually boils down to a dispute between the "power users", who have spent days/months/years/decades learning the specifics of a low-level interface and can accomplish really amazing feats because they're highly skilled, and the "average users", who want to spend as little time as possible learning the interface and just use it for everyday tasks with minimum hassle. Neither side can claim a moral high ground, and neither is inherently correct. Any interface must be able to perform very complex, robust actions, but should be simple, intuitive, and easy to pick up for the simple, day-to-day tasks (and users).

The cause of this argument: In both areas, we started with low-level interfaces but got sick of their drudgery and stupidity; then we developed high-level interfaces but got sick of their inflexibility; and finally, we grafted the two together. So you have high-level C++ code with in-line assembly bits mashed into it. And you have a Windows machine that requires shelling out to DOS to perform any non-trivial function. It simply doesn't work! You don't want to have to perform every single bloody step all of the time - it's monotonous and error-prone. Imagine trying to write Word 2000 completely in assembly! AND, the high-end tools we're stuck with are frustratingly intractable; try performing batch actions on lots of files in Windows, or registering a window in a slightly different way using the high-level Microsoft APIs.

It needn't be this way. The next step in the evolution of both programming languages and operating systems is a correct meshing of the two interface methodologies. There's nothing inherently wrong with high-level interfaces, but we need tools that are so easily and flexibly customizable that they rival or surpass their low-level-interface equivalents. It REALLY IS that simple.

Wait 30 years. That's my advice. In 30 years, someone will have figured out how to do this effectively, both for operating systems and for programming languages.

See you then.

-- DavidStein?

I agree with everything except the time frame. You have put your opinion OutThere concisely. That will shorten it. -- PeterLynch


This short article - Ellen Ullman's "The Dumbing Down of Programming" - is worth reading.

-- FrankGerhardt

The article has two parts.

The article disappointed, especially in comparison to Paul Haeberli and Bruce Karsh's FuturistProgramming which also complains of dumbing down without mentioning Microsoft: http://www.sgi.com/grafica/future/index.html. -- WardCunningham


I seem to remember BjarneStroustrup mentioning something in his C++ book to the effect that "taking complexity out of a programming language doesn't make that complexity go away; it merely forces that complexity into the programs that are written in that language." I can't find the exact quote. (It wasn't in the book, it was in a Slashdot interview. See ComplexityHasToGoSomewhere.)


Most of what is below has to do with CLI vs. GUI and isn't about the dumbing down of programming. EditHint: RefactorMe to a new page, say TheDumbingDownOfInterfaces?.


My experience agrees with her assertions. Admittedly, she is a bit on the anti-Microsoft side, though not necessarily in that article. Still, can you blame anyone tired of pretty but buggy software for being against the primary vendor of that type of software? -- EdGrimm

GUIs are very good for many activities, especially for things you don't do very often, but they are not always best for very expert users or for tasks that involve collections - e.g. move all the files that begin with A or F, that include 'Fred' in the middle, and that are text files. -- SteveFreeman
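
For a concrete sense of the kind of collection task Steve describes, here is a minimal Python sketch; the directory names, the destination, and the .txt test are assumptions for illustration, not anything prescribed above:

 import re
 import shutil
 from pathlib import Path

 # Illustrative pattern for Steve's example: names starting with A or F,
 # containing "Fred" somewhere after the start, and ending in .txt.
 PATTERN = re.compile(r"^[AF].*Fred.*\.txt$")

 def move_matching(src_dir, dest_dir):
     """Move every file in src_dir whose name matches PATTERN into dest_dir."""
     dest = Path(dest_dir)
     dest.mkdir(parents=True, exist_ok=True)
     for path in Path(src_dir).iterdir():
         if path.is_file() and PATTERN.match(path.name):
             shutil.move(str(path), str(dest / path.name))

 # Hypothetical directories, for illustration only:
 # move_matching("incoming", "fred-files")

The point is not the particular language; it is that the selection criteria compose freely, which is exactly what a fixed dialog box tends not to offer.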

GUIs aren't good at certain things - but that isn't necessarily an inherent limitation of GUIs. It happens because the real experts are frustrated by GUIs, so they go back and use their usual tool to accomplish their tasks, while the people left using the GUI don't have as much of a need for fancy tools, or are conditioned to accept what's available. More research should be done on how to make GUIs more powerful. Drag-n-drop could be used to construct a UNIX-style pipe. To find all files matching a regular expression, one might right-click on the directory, click on "RE search", and type in a regular expression. That's only a part-GUI method. Or, "RE search" might bring up a dialog which lets a user connect pieces in ways that symbolize concatenation or alternation, etc. In this way, the user wouldn't have to go back to the shell prompt and cd to the appropriate directory. (Also, perhaps you could drag the directory icon to the shell, and it would automatically change to that directory.) -- anon


So what do you do if there is no dialog box for what you want to do? -- wpociengel

I installed Linux many times - SlackwareLinux and RedHat - and never wasted more than half an hour, including kernel compilation to make it smaller and faster. It always did the job I wanted. -- PatricIonescu (patric@crinsoft.ro)

What distresses me about Microsoft is the downward trend in quality and usability. Yes, usability: it's two steps forward and three steps back. Some of the "improvements" have a dark side. -- KielHodges

(All improvements have a dark side!)

I remember Microsoft being slated by the popular computing press over the difficulty of installing Windows 3.0. These problems were bypassed by simply getting OEM PC manufacturers and distributors to ship Windows "ready-to-run". There's no reason why Linux can't be shipped "ready-to-run". It's not even a technical issue - if the political and economic motivation is there, it'll happen.

My second Linux machine was built with the policy "if it's not on the compatibility list, I'm not buying it". This is much easier than struggling with some anonymous built-in graphics card or SCSI controller. --

There are companies that pre-install Linux these days; just none of the major OEMs, so far as I'm aware. I seem to recall there being a list of companies that ship systems with Linux installed somewhere on http://www.linux.org. -- EdGrimm

I don't know how old this comment is, but as of May 2000, Dell and IBM are shipping PCs with Linux pre-installed. Dell I know is shipping RedHat; I think IBM is also. -- StevenNewton

If it's the case that software is hard to install on Linux, then I suppose I could make the case that on a Mac it is the easiest of all, and therefore we should all be using Macs. *nix is probably one of the easiest OSes to install software on, as it tends to put everything an application needs into one directory - the app's own - as opposed to wondering which flavor of comctl32.dll this new app will want to use, or, if we go back in history a bit, which flavor of winsock.dll will be the correct one. My preference is to spend the disk space and keep all of an app's files together. Uninstalling consists of altering some environment variables and deleting the program directory (but I digress).

Also, I can't begin to count the number of 'admins' (email, OS, whatever) who had NO idea how a system worked and were totally lost if there was not some menu option to configure the item they wanted to configure. (Networking comes to mind.) And I can agree that there can be some good in having menus and dialogs for the more esoteric options; but I disagree when those same controls are used to limit the options available.

Further, while I can agree that the ordinary user doesn't need access to most of these 'advanced' options (if you agree with this, then you might want to examine your position on the Mac OS), the current trend seems to be to remove access to anything but the very basic system controls. -- WilliamPociengel

One of the things that I love about the admin 'gui' in IBM AIX (smit/smitty) is that for every single activity that it performs, you can choose to either execute it, or see the command it is about to run. It guides you through with simple prompts on how to do things, and then tells you the commands that you need to run if you want to do it again without having to get the gui involved. This inevitably leads to the ability to say "Well, I'd like to be able to do blah, which is almost the same as foo, which I can do through smit", and then go off and do what you want. It's ok to dumb programming or anything down, as long as you still leave open the door which leads to "How does that work?". -- DanielSheppard


"Unix is user-friendly; it's very user-friendly. It's just picky about who its friends are."

IMHO, the simple answer to the X Windows problem is to, um, not run it. SuSE gives me a nice GUI to do my admin stuff with, though there are things it doesn't cover yet. I'll predict that by this time next year, there will be usable GUIs that cover virtually all admin stuff in both X Windows (RedHat is almost there) and text consoles (SuSE is getting close). There may even be some products that are already there that I don't know about. That happens a lot with Linux.


[An avid Windows user says that they're more productive with a GUI than with a command line, and that there are studies to support this assertion.]

What studies? I think it is right to refuse to be constrained to ONLY a CLI or a GUI, but "I don't do command lines" does not seem to be very open-minded. Being a power user usually means you use the right tool for the right job. Often, a Perl script and command-line tools are what you need for certain tasks - even for many NT admins.

I suspect that those who are more productive in CLIs are those who type very well, and those who are more productive in GUIs are those who type less well; for them, clicking and pointing is faster than typing. For a very good typist, clicking and pointing can be much, much slower (and less expressive). Of course, it depends upon what you're doing. I'd hate to draw using a CLI just as much as I hate to compile using a GUI. -- WayneConrad

Well, actually, I can't type worth a s^%&, but I REALLY hate having to keep reaching for the durned mouse just to make a single click between keystrokes. I do agree that some actions are better suited to keystrokes and others to a tablet (yes, I think a tablet is much better in some respects). -- wpociengel

To each his own, I guess. I'm most productive on X Windows with a NextStep-clone interface and a lot of XTerm prompts around me... but, then again, I feel pretty productive on my Mac too. -- StuCharlton

10 digits vs a single hand/arm. It's unarguable which input method is superior. -- clayne

I use both GUI and command line interfaces, and find that neither is a complete solution by itself.

I've even been known to send files from Windows to Unix, just so I could use the Unix tools and vi/vim to work on them - before returning the data to Windows. -- JeffGrigg


The irony of this discussion taking place on a page called "TheDumbingDownOfProgramming" is, for me, inescapable.

I assert that part of being not-dumb is being aware of what has been done before me, and being willing to understand and comprehend why it was done. I find it astonishing and disheartening that at the end of 1999, practitioners within our industry are still debating the GUI-vs-commandline "issue". I assert that this is comparable to the still-raging (within its community) digital-vs-analog debate among audiophiles.

As I observed in my comments on EmacsVsVi, this issue is amenable to rigorous study, it has been studied, the results of that investigation were not ambiguous, and the "problem" was largely solved in the Macintosh user interface of the middle 1980's. Apple and Xerox made substantial contributions to the human factors literature of that time. This ground was also plowed in SIGGRAPH, when the technology was considered ground-breaking and innovative.

There is no technical reason why a gesture-driven user-interface needs to exclude command-line input, and there is no technical reason why a command-line interface needs to exclude a GUI (this has been demonstrated in OberonOperatingSystem; the book TheHumaneInterface discusses various approaches to this in detail). There are good and well documented attributes that either must follow to optimize "usability" (another well-defined and easily measured quantity). For example, the "noun-verb" ordering is significantly and compelling less complex than its inverse.

AMEN! And well spoken. -- wpociengel

The vigorous protestations of the Unix community in defense of the shell-du-jour (CSH, KSH, BSH, etc.), like the similar arguments from the Linux community, are reminiscent to me of the arguments from the vacuum-tube-and-vinyl community that "digital must always sound cold and empty". Vacuum tubes and vinyl impose specific, predictable, and measurable effects on audio. Digital systems extend those limits well beyond the range of human hearing. Yes, the two sound different. But... if the goal is to reproduce the sound of the original source as faithfully as possible, then there is no question that digital systems are superior. It is perfectly reasonable to say "but I prefer the sound of my tube amp and my vinyl". In a similar way, it is perfectly reasonable to say "I prefer the CSH environment under 4.2BSD".

Ermmm, sorry to say this, but vinyl contains significantly more information than CDs. Tubes are faster than transistors. Only now are digital systems exceeding vinyl/tube resolution. For me, GUIs are getting better all the time. Tooltips, good design, and my new handy trackball have made the difference. My new hi-fi (oh please please please) will be all digital 24/96/6.1 and finally dust those weird mechanical apparati. I suppose my point is that GUIs are getting better all the time, but the old-timers still have a few valuable tricks. -- RichardHenderson.

[Discussion of keyboard and mouse/menu usage moved to PointerAndKeyboard.]

Interestingly, one of my favorite GUI behaviors has been popping up more and more in CLIs: poking an object to see what you can do with it. In a GUI, this is usually done by right-clicking it (or control-clicking it, or slow-clicking it), which results in a pop-up menu showing (most of) the things you can do to that object.

In the Unix CLI, more and more commands are supporting the equivalent of this: the --help flag. If I want to know more about some program, I can just invoke it with --help, and it'll usually give me guidance in how to use it, and what the major options are.

Oh, but they are backwards with respect to each other. In a typical Windows GUI you ask Readme.txt (the object) what can be done to it (the verbs), and the system answers that (because Readme.txt is .txt is txtfile is "Text Document") it can be Opened or Printed, and the system knows that both verbs are done with notepad.exe (the subject). The complete sentence looks like: Readme is printed, and I don't care how. In a CLI, you ask the subject about the verbs it can do and tell it the object: notepad.exe /p Readme.txt, and the sentence looks like: Notepad prints Readme.


One of the universal problems in discussing any such issue is to nail down what the topic of conversation is. That is, if it's a general topic, like "What do you think is dumb about modern tech?" then pretty much anything goes. If the topic is "How can the issues brought up in Ellen's article be addressed?" then one can usefully determine on topic vs. off.

May I propose the topic of "How can one improve the productivity of programmers without simply hiding complexity?" Technologies such as the STL (Standard Template Library) for C++ are moving in the right direction IMHO, but are there proposals for any higher-level libraries? Such as a patterns library, for example... Cheers! -- IvoRoper

A pattern library won't help... each design requires a different selection of patterns, which are combined and implemented in different ways from any other design. (See PatternChaos.)

In defiance of what I, too, would have said was impossible, Andrei Alexandrescu has done this with his Loki library. The book which is the best documentation for it lives at http://www.aw.com/catalog/academic/product/1,4096,0201704315,00.html, and I thought that was where the code download lived too, but I can't find it, sorry. I'd seriously recommend it to any C++ programmers interested in "improving productivity while hiding complexity".


http://freshmeat.net/projects/lokilibrary/?topic_id=809 seems to be able to track the LokiLibrary? pretty well.


What about Ellen's idea of using wizards with full knowledge of what the wizard is up to - using wizards while maintaining the capability of opening the cover to see for yourself what is going on? Has any work been done on developing a system of programming that supports something like this? How about OpenWizardry or OpenWizardAssistants? -- PatCallahan

A simple step toward it would be a wizard that builds a script based on your responses, then shows you the script before (or instead of) executing it. A lot of makefiles have a "make config" option which essentially does this (though it's more like waterskiing than hand-holding).

One of the good things about GnuEmacs is that, right after you've done something, you can see the elisp code that would do the same thing. That mentally links the language-like representation of the commands with the gesture-like representation.
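
In that spirit, here is a minimal Python sketch of a wizard that shows its work before running it; the prompts and the tar-based backup command are assumptions made purely for illustration:

 import shlex
 import subprocess

 def backup_wizard():
     """Ask a few questions, show the equivalent command, run it only on request."""
     src = input("Directory to back up? ")
     dest = input("Archive file to create? ")
     compress = input("Compress it? [y/n] ").lower().startswith("y")

     flag = "-czf" if compress else "-cf"
     command = ["tar", flag, dest, src]

     # Show the user exactly what would be executed, smit-style.
     print("Equivalent command:", " ".join(shlex.quote(part) for part in command))

     if input("Run it now? [y/n] ").lower().startswith("y"):
         subprocess.run(command, check=True)

 if __name__ == "__main__":
     backup_wizard()

The value is in the echo: after a few passes through the prompts, the user has seen the command often enough to type it directly - the door to "How does that work?" that DanielSheppard describes stays open.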


Well, this has nothing to do with CLI/GUI, but TheDumbingDownOfProgramming certainly relates to the discussion in ThreadsConsideredHarmful.


One way to make CLIs easier for newbies would be to provide auto-completion of command-line arguments. Also, having a tooltip with the command's usage displayed would be invaluable. Remembering the options for a command, and the proper ordering of its arguments, is one of the hardest parts of using a CLI.

Has anyone seen a CLI that provides tooltips or auto-completion? -- EricRunquist

Tooltips, no. Autocompletion shouldn't be hard. Just show the part that is being guessed in reverse colors, just like a GUI does.

The GRUB (GrandUnifiedBootloader) shell lets you autocomplete device names and kernel images. If there were such an extension to the Unix shell too, it would be great.

Bash now has insane possibilities for providing autocompletion. The Debian installation of bash, once completion is enabled, completes most enumerated arguments for all standard GNU utilities, knows approximately all command line arguments for most shell utilities, knows everything about Debian-specific programs, and can complete package names in apt and dpkg command lines. You don't need Debian, though - Debian merely has a well-rounded bash completion file by default. -- TorneWuff

Command line history and autocompletion is abstracted in the GNU readline library, and so can be easily added to any command line application.
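
As a sketch of how little this takes, Python's standard readline module (a thin wrapper over GNU readline on most platforms) accepts a single completion callback; the command names below are made up for illustration:

 import readline

 # Hypothetical command vocabulary for a small tool.
 COMMANDS = ["configure", "compile", "clean", "deploy", "help", "quit"]

 def complete(text, state):
     """Return the state-th command starting with text, or None when exhausted."""
     matches = [c for c in COMMANDS if c.startswith(text)]
     return matches[state] if state < len(matches) else None

 readline.set_completer(complete)
 readline.parse_and_bind("tab: complete")

 while True:
     line = input("tool> ")
     if line.strip() == "quit":
         break
     print("would run:", line)

Completing a command's own options and argument order - the part called the hardest above - is the same mechanism with a smarter callback.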


Okay, here is my $0.02 worth. First, concerning TheDumbingDownOfProgramming, I remember some useful comments from LeoBrodie in ThinkingForth (I think). He suggests that the key to managing levels of complexity and abstraction is removing any arbitrary distinctions between them. For example, he points out that low-level assembly-like code and high-level problem-domain code look the same in the ForthLanguage. They are not arbitrarily different in syntax, structure, complexity, etc. They differ only in semantics (one manipulates the processor and the other manipulates problem-domain objects). I think this is crucial. We need to isolate and remove artificial complexity. And I agree with the BjarneStroustrup quote above regarding the language's participation in managing complexity. However, I believe that ultimately, programming will always be a challenging and very human activity. As I have said, in effect, elsewhere: ProgrammingIsInTheMind.

Second, concerning CLIs, I found AmigaDOS to be the most compelling. It combines all the best aspects of Unix and DOS CLIs, with some additional unique characteristics. It uses and encourages readable (but brief) names for commands and options. It provides conventions for handling arguments, and especially options, that dramatically increase ease-of-use. It provides a complete scripting environment that is so well integrated that scripts are indistinguishable from built-in commands - all the supporting functionality for built-in commands become instantly and fully available for a script, and a script can be used everywhere and in every way that a built-in command can be. Writing scripts for AmigaDOS is a joy, and the results are usable, readable, and maintainable.

Furthermore, AmigaDOS provides an elegant means of managing program configuration. Each "shortcut" icon in the GUI can invoke a program, optionally with a document file. Nothing new there. But each icon can be easily configured to create a custom program "environment" using simple name/value pairs similar to Java property files and Windows INI files. There are no centralized registries or configuration databases, yet it is immensely usable.

I regret that the Amiga platform has essentially died, and my two Amigas sit waiting for the next garage sale. But I seriously hesitate to part with them, and mostly because of the elegance of the CLI/GUI. I have yet to find anything that matches them.

Overall, I think that TheDumbingDownOfProgramming is only a symptom of something more - TheDumbingDownOfHumanity?. My observation is that humanity has forgotten many of the basics while it has embraced technology, specialization, and other aspects of modern life. For example, it seems that most of our peers are content to learn one language or database or OS, etc. Many will go on to learn a few other tools. Some will go beyond tools (the what) to explicitly embrace technique (the how and why). But only a few seem to pursue the tools, techniques, and beyond, with a passion and intellect that seems characteristic of our ancestors. Most of our peers today seem uninterested and unprepared to make that endeavor. Moreover, this does not seem limited to our industry. It seems that, in every measurable and detectable way, humanity is shrinking back from the challenge to think, study, know, and act. There are numerous counterexamples, such as what I find here on the WikiWeb, but that seems to be the trend. For example, it seems that people today are much less prepared to find the faults in any line of reasoning. They are easily swayed by meaningless rhetoric, and do not seem to see any need to question it. They cannot think critically. They do not bother to know themselves - who they are, what they know, what they do not know, why they do what they do. It is possible that my observations may reflect my isolated circumstances, some wish to venerate history, or some other fallacy. Nevertheless, it seems to explain much of what I see. It is not my intent to be condescending or overly negative, so I apologize if it seems that I am.

-- RobWilliams

Sounds like "TheFeelingOfPower" by IsaacAsimov.


Now for the alternative view. Simplification is not "dumbing down." It takes very intelligent people to simplify things. It took a lot of intelligent people and time to go from manual tasks to dedicated hardware for tasks, to programmable hardware, to software, to 1st, 2nd, 3rd, 4th generation programming languages, etc. It has taken a lot of intelligent people to move computers from the realm of the scientist to where five-year-olds and grandmothers can surf the web. Simplification is hard work, and we should take pride in making things simpler and not refer to it as "dumbing down."

Tell that to all the five-year-olds and grandmothers who could surf the web, if they could figure out how to make that damn $1000 "black box" actually go. You have shared a nice sentiment, but part of the point is that it predominantly has not actually been realized. Even surfing the web is a significant undertaking for most people. And when it breaks, the "black box" bubble bursts. There are very, very few real examples of the kind of simplification that you describe.


Exposure to J2EE and its related vendor tools has the subtle effect of changing you from a SoftwareEngineer into a ConfigurationEngineer?. It's subtle because it takes a while before you realize that you're not using your brainpower for programming anymore, you're now using it to second guess the tools when they don't work (and that's too often to ignore). Twenty minutes to figure out how to add the archived libraries to your project. Three days to figure out how to use Tool A to deploy a web application to server B. What's wrong with that web.xml file? It's written to the spec, but the server won't implement the servlet mapping I specified. Eventually, the fix is found. Now back to programming. Let's see, how can we make the data in these SQL tables look like objects? Write reams of JavaBeans setters and getters and some glue to copy from the result set into the beans. (Thank god for tools to help with that.) Now we're programming, aren't we? Sure thing. Remember to include a dozen or so empty but required methods in that EJB. Clickety-clack. The keyboard is humming and my work is getting done. But what happened to the days when I used to solve real problems?

If this is the case, then I would suggest changing your tools. This is a clear case of SharpenTheSaw. Switch to reliable, simple (preferably open source) tools and investigate whether there are better ways of achieving your goals. For example, you should have an abstract class containing those empty but required methods, and your bean implementation classes should just extend this AdapterClass?. -- AdewaleOshineye
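
The AdapterClass idea isn't Java-specific; here is a minimal sketch of the pattern, with hypothetical callback names and written in Python rather than EJB Java for brevity:

 class LifecycleAdapter:
     """Supplies do-nothing versions of every required lifecycle callback."""

     def on_create(self):
         pass

     def on_activate(self):
         pass

     def on_passivate(self):
         pass

     def on_remove(self):
         pass

 class OrderBean(LifecycleAdapter):
     """Overrides only the callback it actually cares about."""

     def on_create(self):
         print("allocating order resources")

The dozen empty-but-required methods live in one place instead of being repeated in every bean.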

Isn't the dumbing down of programming an actual aim of the Java programme? Wasn't Java meant to be a language in which it would be safe for not-very-good programmers to write small "utility" programs for widespread distribution? These days I work mainly in the world of hand-held devices, where Java is (now) a reasonably good fit for thin-ish clients and other small apps, but I did some EJB stuff back when that was new, and still remember being stunned at the idea of writing large, mission-critical server code in a language whose standard collections, for example, don't support internal iteration, and whose standard libraries more generally would have one believe that arrays are a jim-dandy data structure for almost everything (thus, ironically, making life much harder for the not-very-good programmers). It's a strange world. -- KeithBraithwaite

I don't think we have dumbed down programming as much as we have shifted the primary cognitive makeup of programming from logical reasoning to inductive reasoning. When you are writing in C or one of the older languages, you are doing logical reasoning, because you are basically putting lots of simple pieces together to make a more complex whole. When you are doing object-oriented programming, as with C++ or Java, you are studying large numbers of complex building blocks to find out the right way to use each. Because you have to wade through masses of data to understand what you are looking at, there has been a shift in cognitive focus for programmers toward inductive reasoning. Things have neither been dumbed down nor made more difficult; they are just different. No more putting simple pieces together. I'm great at logical reasoning; I wish I were better at inductive reasoning. -- JonGrover


So I write this stuff that uses XSLT to process XML, which is the RightThing IMO. One of the other guys needs to tinker with it. "To tinker with this", I tell him, "you will need to understand the XSLT spec at this url and the XPath spec at this url". His response? "I don't want to read that! Just tell me how to edit the file!". Sigh. And this guy is not useless, he's a productive coder.

We can all guess what the next exchange will be:


See also: CoordinateVersusNestedGui, IgnoranceDrivenDevelopment, GreatLispWar

