Q&A with Barbara Meyers Part 1: How Early Technologies Revolutionized The Publishing Process
Below is a Q&A I did with long-standing friend and industry stalwart Barbara Meyers Ford. It was inspired by my post on the growth of mentorship and in honor of the great advice and support Barbara has given me over the years, helping me integrate into the community when I was fresh off the boat here in the US.
Readers should find a bit of everything in this post: a historical journey through four decades of the publishing industry, astute and still-relevant observations, wonderful career advice, echoes of the STM Challenges Diversity blog posts, and, finally, the genuine character, experiences, and lifetime of amazing work of one of the industry's true gems. There is something here for young and old alike to aspire to and learn from.
Answers from Barbara Meyers Ford, DBA Meyers Consulting Services, to questions posed by Adrian Stanley, VP Digital Science
Tell me a little about how you got into publishing and a few highlights and special moments from working in scholarly publishing.
Like many of us as youngsters, I did not say “I want to be a publisher” while I was growing up. I had always dreamed of being a scientist, and by the time I was in high school the death of a dear uncle from cancer had tipped me toward wanting a career in biochemistry. Starting out as a pre-science major in my freshman year, I was disappointed to learn two things.
- The first was that my university (at that time) did not allow science majors to have a minor. I had hoped to be a scientist who could communicate research findings to the public (little did I know that in 1971 such a concept had not made it into mainstream science). So I wanted journalism as my second focus as an undergraduate.
- The second rude awakening was learning how chauvinistic the field of chemistry could be. Coming from an all-girls Catholic high school, I was ill-prepared to deal with my classmates, roughly 90% of whom were Jewish male pre-med majors. Being the only girl in my lab classes was a harbinger of my early- to mid-career years.
Switching to journalism with a science minor allowed me the next best thing. I took introductory courses for science majors in most disciplines along with my core journalism curriculum. The high point of my academic career was having Warren Burkett, author of Science Writing for the Mass Media, as a professor and mentor. His encouragement and guidance remain with me to this day.
As an undergraduate, I started working in the D.C. world of non-profit organizations, initially with the National Rehabilitation Association (NRA) as an editorial assistant. This was my first real job in scholarly publishing as I was responsible for following up with the reviewers for the NRA’s journal. When I speak about appropriate technology and use the example of 3X5 index cards in a shoe box to hold a reviewer “database” (by the way, we didn’t use that term back then) I’m recalling my earliest days in publishing.
Skipping ahead: after graduate school, where I focused on science policy and technology assessment, I was lucky enough to spend time at the NAS Office of Information, which drew upon all of my university training. It was 1976, and the NAS, like many other D.C. organizations, had put together a very special exhibit celebrating the Bicentennial. I was involved not just in “translating” research reports for consumption by newsmen with little to no background in science but, because of the timing, in public relations activities as well. My “big coup” was getting the NAS exhibit a shout-out from all the tour buses.
Answering an ad in the Washington Post, I landed a position with Capital Systems Group (CSG), a consulting firm and one of what were known as the “Beltway Bandits.” CSG had a contract with the National Science Foundation (NSF) which was in its second year when I arrived. A loose-leaf publication, “Improving the Dissemination of Scientific and Technical Information: A Practitioner’s Guide to Innovation,” gave me the opportunity to research and write about all the amazing things being done in the 1970s as computers were starting to be used in the publishing process, mostly in the areas of law and science. It was an exciting time for the industry, and my work brought me into contact with all the major players, from society publication directors, to presidents of printing and typesetting companies, to engineers working with the newest technologies. Such a broad-based, one-on-one education in the entire publishing process, from author to reader, received from leaders in the industry, can’t be replicated in any academic or commercial training program. Those of us involved in the Innovation Guide, from staff to advisory board members to readers (for several years we were the “blockbuster” for the now-defunct NTIC, the National Technical Information Center, the distributor of all government-funded publications), considered ourselves quite lucky to be connected through this very special project.
What are the major changes you have seen throughout your time in the industry?
Wow, that’s a really difficult question because it’s hard to single out just the “major” change events. I came into publishing when it was literally on the cusp of moving from hot metal to cold type. The first instance of text being accessible completely via computer was in the late 1960s with OBAR, a database of all the Ohio legal statutes. The next three decades were the era of Lexis-Nexis in the larger arena of business publishing and of several ventures into full text by academic publishers, both society and commercial.
In the late 1970s into the 1980s I found myself a member of the ACS R&D team developing the first online full-text database in the sciences (one of my bit parts in our transitioning from print to screen). By today’s technical standards we were dealing with bear skins and stone knives, but for its day we were cutting edge. That database later became CJO, Chemistry Journals Online.
Our commercial publishing colleagues, after a decade of their own technology projects, launched larger partnerships such as Red Sage (Bell Labs, Springer, and the UCSF Library, circa 1995; see http://www.dlib.org/dlib/august95/lucier/08lucier.html) and TULIP (The University Licensing Program, through which Elsevier joined 10 U.S. universities from 1991 to 1995 to focus on the technical and legal aspects of bringing primary literature to the desktop).
Being on the scene for this movement from atoms to bits, working with some of the major players, and continuing to keep myself “re-trained” during those decades gave me a front-row seat to several significant changes; some we handled well, and others could have been done much better.
Recognizing our real core competencies was a major stumbling block to successful change. One major example was desktop publishing, which overall was a failure. It accomplished only one thing: decimating the composition/typesetting industry of the day. In books, we saw the rise of author-prepared, camera-ready copy using templates provided by publishers, which resulted in truly rudimentary texts. This approach was most frequently used to get conference proceedings out faster, but that was about the only added value it provided, and a poor one at that.
In journals, there was an equally unsuccessful attempt to have authors (read: administrative assistants or the lowest person on the research team totem pole) create copy using templates. This worked fairly well in the humanities, where the field’s literature comprises mainly text with little to no tables, graphs, or other non-textual material in need of typesetting. In the sciences and engineering, however, templates needed to leave “windows” for any graphic material to be dropped in later using manual cut-and-paste techniques (think X-Acto knives and glue sticks in some cases).
Then in both cases of books and journals the page moved to the next step of photocomposition at its most rudimentary levels … nothing like the pre-flight and other developmental stages which followed before we were able to bring about a more consistent flow of digital content as we have now.
During this transition, publishers learned that there was a very real need for professionals who knew how to competently handle the challenges facing us in the area of composition. Desktop publishing moved from the beginning to somewhere in the middle of the process. We were no longer trying to have authors use word processing templates or provide us with camera-ready copy. But composition (at least for periodicals) moved farther downstream, and you saw the rise of systems such as Xyvision and Quark. You also saw the split between editorial staff using PCs (the first IBM PC was introduced in 1981) and production staff working with Apple Macintoshes, which were much better suited for layout and design. Next came researchers developing their own typesetting systems, TeX and later LaTeX, to handle complex math and associated special characters; LaTeX still serves as the mainstay for digital publications in the sciences, engineering, mathematics, and the like.
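To give a sense of why researchers embraced LaTeX, here is a minimal sketch (an illustrative example, not drawn from the interview) of the kind of display math that required manual cut-and-paste under template-based composition but is a few lines of plain text in LaTeX:

```latex
\documentclass{article}
\begin{document}
% A displayed, automatically numbered equation with an integral,
% a superscript, and a special symbol -- all set from plain text,
% no paste-up "windows" required.
\begin{equation}
  \int_{-\infty}^{\infty} e^{-x^{2}}\,dx = \sqrt{\pi}
\end{equation}
\end{document}
```

Because the source is plain text, authors could prepare complex mathematics themselves and publishers could process it downstream without re-keying, which is a large part of why LaTeX persisted in the sciences.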
Quick comments on other examples of applying computers to the publishing process from the 1970s onward and the surrounding environment.
- Automated Peer Review = first software developed by Dr. Lorrin Garson at the ACS, which spawned an entirely new vendor community that continues to innovate throughout the publishing workflow, providing process-management support heretofore done manually or with Excel spreadsheets and other rather clumsy software programs.
Historical aside: It was very tough to get software developers interested in providing programs for publishers back in the 1970s-80s. Compared to other areas of commerce such as banking, we were a backwater and not worth their time. It wasn’t until some smaller enterprises came into play that we saw real progress made in this area.
- Electronic Publishing = no longer only focused on editorial and production but having moved squarely into dissemination as an interim step to finally defining the entire flow from author to reader.
- Automation of the publishing process & beyond = brought about the “content vs. container” debate, which only muddied the waters when what we needed to understand was the whole evolution from ink on paper to technology-based generation, distribution, access, and preservation of data/information.
Aside: We’re still working on this. Much of what we do continues to just move the formatted printed page to the screen. One major exception to this is the development of the Article of the Future by IJsbrand Jan Aalbersberg, SVP Journal and Content Technology, Elsevier, and his team.
- Copyright = 1976 brought the new Copyright Law which along with dramatic changes in commercial journal pricing created a chasm (perhaps abyss might be a better word) between publishers and libraries. I remember Cliff Lynch commenting at an early Charleston Conference how our problems with copyright would only be resolved through contract law. His words foreshadowed site licenses and other developments in this important area of intellectual property ownership.
- Marketing & Fulfillment = the big initial deal here was having computers handle subscriber lists both current and potential but it wouldn’t be until the late 20th century before professional and scholarly publishers would recognize—as our counterparts in trade and other types of publishing had learned long ago—that automating promotion and retention was the most critical aspect in this piece of the process. We still lag behind our trade publishing colleagues in the use of technology and certainly SoMe (social media) in creating and sustaining the communities based on subject interests.
- Internet/World Wide Web = without getting into all the historical sagas relating to these two major innovations, the significance here is how the WWW totally blindsided publishing. As websites were first being developed, I remember attending a 1985 conference in San Francisco called “The Commercial Internet.” On Day 1, a speaker from one of the major computer companies announced the launch of their website as she spoke. I was the only publishing professional in the room to hear her. It would be a few more years before the publishing community as a whole started to recognize that we had abdicated how the Web would be governed and developed to other industries, much as we had missed recognizing our place when the first databases were developed and an entirely new information industry arose in the vacuum we ignored.
Can you explain the idea behind setting up SSP (the Society for Scholarly Publishing)? You were one of the founding organizers, correct?
SSP was founded formally, if you will, on June 18, 1978. It was the concept of several people involved in the IEEE Conference on Technical Communications (which met every two years until the mid-1970s), the Advisory Panel and staff of the NSF project I mentioned earlier, “Improving the Dissemination of Scientific and Technical Information: A Practitioner’s Guide to Innovation,” and a subset of the membership of Computers and the Humanities. The idea was straightforward: those three groups were going out of existence for one reason or another, and hence the communication and dialogue their members enjoyed was going to be lost.
At the final meeting of the Innovation Guide’s Advisory Panel, several members raised the issue of how to continue their networking, and that started a very serious discussion and the development of plans for SSP.
We hoped that SSP would fill that existing communications gap and more. It was our intent to complement the activities of the Professional and Scholarly Publishing (PSP) Division of AAP and the Council of Science Editors (CSE). PSP was our trade association. SSP was based on individual membership. CSE (the Council of Biology Editors until 1981) was focused on the editorial component of publishing. SSP wanted to broaden people’s perspectives to take in the entire process of information moving from author to reader. Our godfather, Jim Lufkin of Honeywell (and coordinator of the IEEE conferences mentioned earlier), called it “looking over each other’s walls.”