Designing the Smart City: A Programmatic Approach to Inclusive Innovation in Atlanta [External]

This week, CUI Director Dr. Jennifer Clark had a post published in the Atlanta Studies blog. Atlanta Studies is “an open access, digital publication of the Atlanta Studies Network,” which includes students, instructors, and researchers from Emory University, Georgia State University, Georgia Institute of Technology, Clark Atlanta University, Kennesaw State University, the Atlanta History Center, and the New Georgia Encyclopedia. Dr. Clark’s post, “Designing the Smart City: A Programmatic Approach to Inclusive Innovation in Atlanta,” discusses the emerging range of smart city interventions and uses several examples from Atlanta, including the MetroLab Network and the North Avenue Smart Corridor, to explain the city’s role in these larger developments.

Smart cities are about sustainable economic development in the future – not just autonomous vehicles like the one that took a test drive on the North Avenue Corridor on September 14 – and that requires a programmatic approach to technological change, not just a single, discrete technology project. In my forthcoming book on smart cities, Making Smart Cities: Innovation and the Production of New Urban Knowledge (Columbia University Press), I analyze smart cities from several vantage points. First, I discuss smart cities as a set of technocratic solutions to urban policy challenges – projects, programs, and products. Second, I describe how smart cities operate as emerging markets for new technologies. Third, I describe how smart cities are developing as a new form of urban entrepreneurship focused on marketing cities in a competitive global economy. Fourth, I explain how smart cities act as a mechanism for exacerbating uneven development. Fifth, I illustrate how smart cities are developed through distributed networks for innovative governance. And finally, I analyze the potential of smart cities as a means for increased civic engagement and open innovation. I argue that technology development is the easy part; the real challenge is the design for, and deployment into, an increasingly liminal space – twenty-first century US cities – where governance, regulation, access, participation, and representation are all complex and highly localized.
Atlanta is a key example of this challenge and underscores the importance of partnerships in the design and deployment of smart cities programs and policies.

To read the article in full, please visit its Atlanta Studies page.

To learn more about the North Avenue Smart Corridor, see the video below.

Wrangling Legacy Data: Preparing for Sociotechnical Change in the Smart City

by Thomas Lodato and Jennifer Clark

Center for Urban Innovation

From connected traffic signals and self-reporting trashcans to automated mobility vans and apps for reporting potholes, smart cities promise to make urban areas more efficient, increase the capacity and options of government and public services, and drive decision-making. These visions are predicated on the use of various advanced (and often computational) technologies that can reveal insights, inform decision-makers and citizens, predict outcomes, and automate processes. Undergirding these technologies—and their insights and efficiencies—are means to produce, circulate, and use data. These data are explicitly captured by sensors and devices, as well as produced as a byproduct or “exhaust” of various systems. Through analysis and application, data fuel smart cities by attuning systems and people to macro- and micro-processes previously too difficult or invisible to act upon.

Given the contested state of smart city data, we embarked on a project to understand what challenges and barriers exist to making legacy data machine-processable in the smart city. Rather than account for technical barriers in isolation, we engaged in constructive design research (a practice-based methodology) to understand the cascade of dependencies embedded within data wrangling. Focused on budget data in the City of Atlanta—a prime example of legacy data ripe for transformation into machine-processable OGD (open government data)—we wrangled these data into structured data files. In particular, we created Google spreadsheets of budget data from 1996 to 2017 that can be exported to various structured formats (.XLSX, .CSV, .TSV). Motivating the production of these data files was the comparison of budgeted revenues and expenses to audited (“actual”) revenues and expenses.
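
As a concrete illustration of that motivating comparison, the sketch below joins budgeted and audited figures and computes their variance with pandas. The file names and column names are hypothetical placeholders, not the project’s actual exports.

```python
import pandas as pd

# Hypothetical exports of the wrangled spreadsheets; the real files are
# Google Sheets exportable to .XLSX/.CSV/.TSV.
adopted = pd.read_csv("atlanta_adopted_budgets_1996_2017.csv")
audited = pd.read_csv("atlanta_cafr_actuals_1996_2017.csv")

# Join budgeted and audited figures on fund and fiscal year, then compute
# how far actual revenues/expenses deviated from what was budgeted.
merged = adopted.merge(audited, on=["fund", "fiscal_year"],
                       suffixes=("_budgeted", "_actual"))
merged["variance"] = merged["amount_actual"] - merged["amount_budgeted"]
merged["variance_pct"] = 100 * merged["variance"] / merged["amount_budgeted"]

print(merged[["fund", "fiscal_year", "variance_pct"]].head())
```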

This project was motivated by the following research question: what assumptions, barriers, and challenges exist within the sociotechnical practice of wrangling legacy data in the smart city? Put differently: what does data wrangling mean in the context of the smart city beyond the technical extraction, manipulation, and transformation of data? Through this practice-based methodology, we learned that machine-processable OGD depend on an array of coordinated features in the smart city landscape, from where data are hosted and how they are documented to the embedded values of opening data and the tacit domain knowledge of data production.

In this blogpost, we give an overview of the insights from the legacy data project. A more detailed analysis will be available in a forthcoming whitepaper on the project.

The basic question driving our research was: with data so vital to the smart city project, why are data so scarce?

Where there seems to be a glut of proprietary, closed data—that is, data that some entity has exclusive control over (e.g. data that can be sold, and whose access and use can be restricted; see Tauberer 2014 for more)—other types of data are more rare. In particular, the data that seem to be lacking are machine-processable open government data. Open government data—or OGD—refers to “non-privacy-restricted and non-confidential data which is [sic] produced with public money and made available without any restrictions on its use or distribution.” As such, OGD are data that are produced by and about governments, citizens, and businesses. Machine-processable data are data “reasonably structured to allow automated processing.” The Sunlight Foundation’s Open Data Policy Guidelines explain that machine-processable data are “[o]ne step beyond machine-readable data” by existing in “a format intended to ease machine searching and sorting processes.” As such, machine-processable OGD are one type of data vital to understanding public processes, transactions, and affairs. Without these data, a host of potential insights and avenues promised by the smart city are non-starters.

One major challenge for the machine processability of OGD is the speed and character of technological change. Simply put, machine processability is not a static state. As anyone who has ever moved between two different computers knows, the ability to use files of varying formats depends on an entire system. Governments are slow to overhaul information technology (IT) systems, while smart city technologies (platforms, standards, systems) seemingly change from day to day. This disparity in the pace of change means that governments adhere to a configuration of aging, obsolete, or outdated protocols, procedures, and infrastructures of data production, circulation, and use. We define this configuration as a legacy system. These legacy systems produce data—namely, legacy data—that fail to be easily integrated into new smart city systems, and therefore are not machine-processable, even if they are open-access or publicly available.

In an effort to catalog the state of OGD in the United States, the Sunlight Foundation launched the US Open Data Census in 2014 in partnership with Code for America and Open Knowledge International. Through hackathon-like events as well as ongoing assessment, the US Open Data Census provides resources to evaluate the availability of city-level OGD based on nine criteria: whether data exist, are openly licensed, freely available, machine-readable, available in bulk, regularly updated, publicly available, online, and in a digital form. The results highlight the scale of the challenge facing machine-processable OGD. Of the 1111 currently identified datasets across 101 US cities, only 627 (56%) are machine-readable. Similarly, only 601 (54%) datasets are downloadable in bulk. Even fewer—only 552 (50%)—are both machine-readable and downloadable in bulk. Though not equivalent to machine processability, the US Open Data Census reports that only 37% of public datasets in the US are, in fact, open. Ultimately, the machine processability of these data depends on the fulfillment of most, if not all, of these criteria. Even more, the percentage of total datasets that are actually machine-processable is likely to be lower than the US Open Data Census might indicate because these percentages are based on cities that have local volunteers willing to sort through the available public data. As such, machine-processable OGD are far less common than one might assume.
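
The census-style tallies above are simple conjunctions over per-dataset criteria. The toy sketch below shows the arithmetic; the records are invented for illustration and do not come from the actual census.

```python
import pandas as pd

# Invented records scored against two of the census's nine criteria.
datasets = pd.DataFrame([
    {"city": "A", "dataset": "Budget",  "machine_readable": False, "bulk": False},
    {"city": "A", "dataset": "Crime",   "machine_readable": True,  "bulk": True},
    {"city": "B", "dataset": "Budget",  "machine_readable": True,  "bulk": False},
    {"city": "B", "dataset": "Permits", "machine_readable": True,  "bulk": True},
])

# Share meeting each criterion, and the (smaller) share meeting both.
print("machine-readable:", datasets["machine_readable"].mean())
print("bulk downloadable:", datasets["bulk"].mean())
print("both:", (datasets["machine_readable"] & datasets["bulk"]).mean())
```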

One way to make OGD machine-processable is to migrate a legacy system to a newer system. Yet where new solutions that expand a city’s capacity are met with enthusiastic support, upgrading existing systems often is not. The cost of upgrading IT tends to be difficult to justify. Existing legacy systems often still work for their given purpose, even if these systems are on the verge of obsolescence. Even more, migrating a legacy system remedies only a part of the problem. The other part is transforming the data themselves to be integrated into an entirely new set of protocols, procedures, and infrastructures. This process is referred to as data wrangling (alternately called data munging): the process of making data useful for a given end. In the smart city, usefulness means machine processability. Therefore, the goal is to transform OGD into appropriate structured formats. Although these two processes—migrating systems and wrangling data—are related, they are distinct, and each presents its own challenges.

But legacy data are more than old files. Changing IT requires retraining personnel, restructuring administrative procedures, and reformatting what data are collected, stored, and accessed. In this way, governments (and many organizations) are slow—technological change implies changes in long-established and deeply entrenched administrative practices, usage expectations, and hosts of other social factors. As such, the challenge of legacy data for smart cities precedes computerized systems and extends beyond the immediate reach of such systems. In short, making legacy data into machine-processable OGD requires more than a technical fix.

Insight 1: Domain Knowledge is Important to Data Wrangling

Although budgets are orderly documents, as a dataset they are complicated. In part, this complexity comes from subtle yet important distinctions about what a budget actually is. To explain what we mean, we first need to explain how budget documents are made.

Figure 1: Diagram of budget process

The City of Atlanta budget process follows the production of three primary documents. The first document is the proposed budget, which is created by May in preparation for the upcoming fiscal year (July to June). This document is created by the Office of the Mayor, and takes into consideration the priorities submitted by Atlanta City Council’s Finance/Executive Committee in a nonbinding resolution. The second document is the adopted budget. The adopted budget is created out of the debates and negotiations of full City Council with regards to the proposed budget. By the end of June, the City Council must adopt the budget. The third document is the comprehensive annual financial report (CAFR). This document is created at the close of the fiscal year and reports the audited expenses and revenues of Atlanta’s city government. The CAFR is then used to help set priorities in the subsequent fiscal year. (See Figure 1 for a diagram of the process.)

In many ways, the process of producing these three documents—the proposed budget, the adopted budget, and the CAFR—is split. The first two documents look forward; the latter looks backward. As such, the proposed and adopted budgets are truly budgeting documents in that they estimate expenses and revenues. In other words, budgeting is a type of planning. The CAFR, on the other hand, is an auditing document in that it is an inspection of the accounts and activities. Between planning and auditing, commitments are implemented as revenue is generated and expenses are deducted over the course of a fiscal year (and much can happen in any given year). Although the CAFR provides data on the actual expenses and revenue, these data are not, say, higher fidelity than proposed or adopted budget data; they refer to different processes altogether. That is to say, the data enclosed in these different documents have continuity, but are not the same. The proposed and adopted budget data are data about projections, and so constitute promises made by civil servants and elected representatives to the public (the Mayor and City Council in the proposed and adopted budgets, respectively). The CAFR, on the other hand, is an account of whether and to what degree these projections and promises were met. Without marking this distinction, one could confuse the value of these different documents and the usefulness of the different data.

In order to make budget and audit data useful, one must first understand what types of evidence these data might produce. Where machine processability is the default technical goal of wrangling OGD within the smart city, machine processability must be coupled with data that are meaningful and understandable to those seeking new insights. Before engaging in any process of wrangling, what is being wrangled must be understood. In the case of budget data, the production of these three documents impacts what types of insights one might gain. This is but one instance where domain knowledge became important to the data wrangling process, as we will see in the next section. Other instances—such as the shift from a January-to-December fiscal year to a July-to-June fiscal year in 2007 (see below)—impacted what and how we collected, extracted, normalized, and documented the data.

Insight 2: Wrangling begins with collection

In order to transform data into a machine-processable format, you must first have data. As such, our initial step in wrangling data was collecting files to establish the data corpus. The data corpus constitutes the known extent of the data relevant to a particular topic (here, Atlanta’s revenues and expenses). But collection is ultimately driven by what we are interested in understanding through data. In this way, collection is always oriented toward a question answerable with data. Motivating our project was a comparative question: how well do budgeted revenues and expenses compare with audited (actual) revenues and expenses? From this question, the data corpus becomes actionable.

Figure 2: Sources of budget data

Managed by the Department of Finance, expense and revenue data are released through two channels (see figure 2). The first channel is the Department of Finance webpages located on the City of Atlanta website. On these webpages, budget data are available in portable document format (PDF). The data are released in discrete documents that must be downloaded individually. Proposed budgets and adopted budgets are released on the Budget and Fiscal Policy subpage. Currently, these documents can be downloaded for 2010 through 2018. Also available on this subpage are monthly financial reports from fiscal year 2010 to December of fiscal year 2017, and five-year financial planning documents from fiscal year 2011 to 2018. Adopted budgets (“budget books”) dating back to 1996 can be downloaded on the Historical Budget Documents sub-subpage. Auditing documents are found on the Controller subpage. This subpage contains both digest documents of the City’s performance (Popular Annual Financial Reports, or PAFRs) from 2012 to 2016, and the more detailed CAFR documents from 2002 to 2016.

The second channel for expense and revenue data is the Atlanta Budget Explorer (ABE) website, a visualization of expense and revenue data hosted by the City of Atlanta and built with Tableau. Conceived during the 2nd annual Govathon in 2013, the ABE is designed to show Atlantans how and where Atlanta city government spends and generates money. The ABE provides information on four of the City’s major funds: General Fund, Trust and Pension, Enterprise Fund, and Service Revenue Fund. The underlying data on the site are primarily derived from the CAFR. Currently, “actual” revenue and expenses are available for 2012 to 2016. Expected revenue and expenses—i.e. budget data—are derived from the adopted budgets for 2017 and 2018. A collated dataset is downloadable from within the ABE as a Tableau worksheet, PDF, Excel file (XLS), CSV, or image (PNG of a visualization). The available data files are individually downloadable based on the particular visualization of funds and departments, and cannot be downloaded in bulk from this or any other site.

As already mentioned, confusing budget data and auditing data presents a problem from the perspective of the kinds of questions one can answer with data. In particular, the ABE presents data that seem to answer the question “Where has money actually gone/come from?” The question that motivates our research is “How well does the City of Atlanta budget?” This second question requires the comparison of budget data and auditing data.

As for the process of wrangling, collection reveals the extent of the task and where to focus efforts. With the data from the ABE being primarily auditing data, we realized the data corpus greatly exceeded the existing machine-processable OGD found within the ABE. Instead, the various PDF documents of adopted and proposed budgets housed the data we were after. Given the absence of these data in machine-processable formats, the PDF files became the primary subset of the data corpus and clearly defined our next steps in extracting, schematizing, and documenting the data. Additionally, with the question of comparing budgeted values for expenses and revenues with their audited counterparts, we focused on the adopted budgets rather than proposed budgets because adopted budgets represent the combined priorities of both the Mayor and City Council. (Again, domain knowledge matters!)

To reiterate, data wrangling means making data useful, and being useful is dependent on the context of use. Where collection seems exterior to the manipulation and transformation processes defined by data wrangling, collection is vital to establishing the context in which a particular question is answerable through data.

Insight 3: Extraction requires synthesis (and/or why automation may not help)

A primary task within legacy data wrangling is extracting data. Extraction entails pulling data from one file into another file for the purposes of further cleaning, ordering, and (re)formatting. In the context of legacy data generally—and specifically with our project—extraction can be a time-consuming, manual process. In terms of time spent, extraction dominated our project work. Where automation may help, it can also compromise data quality and obfuscate telling idiosyncrasies within the data.

Synthesis, on the other hand, is the process of creating data, either through calculations or other manipulations performed on data. In many ways, extraction and synthesis seem to be opposing processes: extraction is rote translation from one file to another, while synthesis is active creation and manipulation. Yet, as we found, extraction and synthesis are sometimes simultaneous in order to produce a data file that is meaningful and complete.

Targeting only adopted budgets, we began to comb through these documents dating from 1996 to 2017. The first challenge was that the quality of the PDFs changes dramatically across the corpus. Newer PDFs were created digitally and so were already searchable. This feature allowed us to easily locate specific funds and line items. Older PDFs, however, were scanned paper documents, and therefore not immediately searchable. Rather than look through these documents completely manually—meaning, visually line-by-line—we performed optical character recognition (OCR). Due to the font and visual artifacts produced by the original scan, OCR was only partially successful. As such, searching older adopted budgets required us to perform a second manual pass to confirm no data were missed.
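
A minimal sketch of such an OCR pass is below, assuming the pdf2image and pytesseract libraries (which in turn require the poppler and tesseract tools to be installed); the file name is a placeholder.

```python
from pdf2image import convert_from_path
import pytesseract

# Render each scanned page as an image, then OCR it. Output quality varies
# with the original scan, which is why a second manual pass is still needed.
pages = convert_from_path("adopted_budget_1998.pdf", dpi=300)
text_by_page = [pytesseract.image_to_string(page) for page in pages]

# Search for a fund name across pages. Because OCR misreads characters,
# a miss here does not prove the term is absent from the document.
hits = [i + 1 for i, text in enumerate(text_by_page)
        if "proprietary funds" in text.lower()]
print("candidate pages:", hits)
```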

Figure 3: An error in the 2003 adopted budget

The next challenge was that adopted budget documents are created by humans and therefore contain errors. For example, in the 2003 adopted budget, two different values were used for the total value of the 2002 Proprietary Funds—$2,768,172,365 on page 35 and $3,740,664,687 on page 112 (see figure 3). Upon checking the value against other documents, the latter value appears to be a typo. The question is: how should we account for this discrepancy in our data file? For this particular cell we recorded the verified value ($2,768,172,365 from page 35) but also produced cell-level metadata that cites the page number of the source and notes the error in the original document. This strategy was extended to all cells in our data set to account for our own potential for introducing human error through data entry and to allow others to inspect our process. In this way, the extracted data are accompanied by a map of how the data were extracted, in the form of transformational metadata, to inform users of the data about why a specific value is listed. Here extraction itself synthesized (meta)data.
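
One way to represent such a cell and its transformational metadata is sketched below; the field names are our own invention for illustration, not a published schema.

```python
# A single extracted cell, carrying the verified value plus metadata that
# records where it came from and what discrepancy was observed.
cell = {
    "fund_group": "Proprietary Funds",
    "fiscal_year": 2002,
    "value": 2_768_172_365,  # verified figure, from p. 35
    "metadata": {
        "source_document": "FY2003 Adopted Budget",
        "source_page": 35,
        "note": ("p. 112 lists $3,740,664,687 for the same total; "
                 "cross-checking other documents suggests a typo."),
    },
}
```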

A third challenge for extraction—and one that also impacted and stemmed from schematization and normalization (see the next section)—was how funds changed over time. As such, certain summary values of particular funds were not always listed in the document. In some instances, these values needed to be calculated, such as the change (in percentage) from one year to the next of a particular fund. In other instances, funds that did not exist or no longer existed required a designation that distinguished amongst non-numeric entries. We created a system to distinguish amongst funds that had no mention in a particular document (marked with “NM”), values that were pending (e.g. audited values for future years [marked with “FY”], or documents yet-to-be-reviewed [marked with “TBD”]), and values that required calculation (e.g. summations of funds; marked with “CALC”). Here again, extraction required synthesis, as this classification scheme distinguished cells with a zero (i.e. a listed value of zero) from empty values.
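
A parser over such entries has to keep those sentinels distinct from a genuine zero; a minimal sketch, using the markers named above, might look like this.

```python
# Sentinels from the text: "NM" = not mentioned, "FY" = future-year value
# pending audit, "TBD" = document not yet reviewed, "CALC" = synthesized
# by calculation. None of these is the same thing as a listed zero.
SENTINELS = {"NM", "FY", "TBD", "CALC"}

def parse_entry(raw):
    """Return a float for numeric entries, or the sentinel string itself."""
    token = raw.strip()
    if token in SENTINELS:
        return token
    return float(token.replace("$", "").replace(",", ""))

assert parse_entry("0") == 0.0                    # a listed zero is a value
assert parse_entry("NM") == "NM"                  # absence is not zero
assert parse_entry("$2,768,172,365") == 2768172365.0
```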

Reflecting on these challenges reveals that automating extraction, although certainly time saving, may jeopardize data quality both in terms of the veracity and the verifiability of the data. In the first case, automation does not account for errors in the data; in the second case, automation does not track the origins of extracted data. In both cases, poor data quality may undermine claims made with data, and compromise the usefulness of OGD.

Insight 4: Schematization and Normalization Are Iterative

Schematization refers to the creation of an organizational structure for data, both in terms of architecture (how data are ordered and arranged as a set) and entry (how an individual datum is recorded). Normalization refers to the standardization of data in light of inevitable variations across different schemata. These two processes create data such that they can be processed in systematized ways, whether that means being joined with other data sets, algorithmically analyzed, or some other machine process. We have already touched on normalization in the previous section with regards to distinguishing between types of empty cells.

Although these processes are central to all data, they are especially important to legacy data wrangling. Legacy data are defined by a change in the protocols, procedures, and infrastructures of data production, circulation, and use. These changes often—if not always—entail changes to the architecture of a dataset as well as the conventions of data entry and collection. Even more, given that legacy data may extend across a large timescale, multiple explicit and/or implicit changes to organizational and entry standards are possible. In the budget data corpus, this final point was certainly true.

Figure 4: Budget data file architecture

After collecting the corpus and deciding to focus on adopted budgets, we initially extracted data from a sample of years (1996, 1998, 2003, 2008, 2012, 2017) to understand the organization of these files and discern an architecture. From these years, we determined an overarching organization and adopted a three-tiered structure (see figure 4 for details). The first tier is fund groups, or groupings of revenue and expenses based on how the money is procured and can be used. These fund groups are Governmental Funds (funds supported by taxes, grants, special revenue, and other sources), Proprietary Funds (funds related to operations and revenues from governmental services), and Fiduciary Funds (funds held in trusts or unable to support governmental activities, e.g. pensions). The second tier is funds, or allotments of money allocated to particular functions, such as the General Fund (money for departmental operations) or Special Revenue Fund (money for specific projects). The third tier is subfunds, or allotments of money for specific purposes, such as the Intergovernmental Grants Fund within the Special Revenue Fund.
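
Rendered as a data structure, the three tiers might look like the sketch below; the classes are illustrative only, with names mirroring the examples in the text.

```python
from dataclasses import dataclass, field

@dataclass
class SubFund:
    name: str            # e.g. "Intergovernmental Grants Fund"

@dataclass
class Fund:
    name: str            # e.g. "Special Revenue Fund"
    subfunds: list = field(default_factory=list)

@dataclass
class FundGroup:
    name: str            # e.g. "Governmental Funds"
    funds: list = field(default_factory=list)

governmental = FundGroup("Governmental Funds", [
    Fund("General Fund"),
    Fund("Special Revenue Fund",
         [SubFund("Intergovernmental Grants Fund")]),
])
```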

Although this high-level architecture carries across the entire data corpus, variation within the framework required iteration on its specific elements. For example, between fiscal years 2013 and 2014, the Group Insurance Fund moved from one fund group to another. Where the Group Insurance Fund persisted across years in name, its funding sources changed, and therefore it exists in two different fund groups. Even more, the shift changed the Group Insurance Fund from a subfund (Proprietary Funds > Internal Service Fund > Group Insurance (Sub)Fund) to a fund (Fiduciary Funds > Group Insurance Fund). In synthesizing data on the percent change between 2013 and 2014, we needed to annotate the data point with cell-level metadata. The annotation noted the change in where the fund was located as a caveat to the percentage change in the fund. At the organizational level, we decided to duplicate the fund name, thereby rendering the name of a given fund group, fund, or subfund no longer unique. This required adding a unique identifier in a separate column. The unique identifier was necessary for machine processability, as a data structure is most useful when it is well defined.
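
The sketch below shows the idea of keying rows by a surrogate identifier rather than by name; the key format is our own choice for illustration, not the project’s published scheme.

```python
# Because a name like "Group Insurance Fund" can recur in different
# positions, each row carries a unique identifier alongside its full path.
rows = [
    {"uid": "F-0001",
     "path": ("Fiduciary Funds", "Group Insurance Fund")},
    {"uid": "F-0002",
     "path": ("Proprietary Funds", "Internal Service Fund",
              "Group Insurance (Sub)Fund")},
]

# Downstream joins and lookups key on the uid, not the (ambiguous) name.
by_uid = {row["uid"]: row for row in rows}
```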

Another schematization issue arose from a shift in the timeframe represented by the data. In 2007, the fiscal year changed; the change was recorded in a one-page document found in place of a full budget for 2007. Budgets preceding 2006 adhere to a calendar fiscal year, spanning January to December. Budgets after 2007 adhere to a July-to-June fiscal year. As such, comparing budgets from 1996 to 2016 compares different timeframes, even though both are fiscal years. In organizing data by year (columns correspond to documents from a given fiscal year), the current architecture obfuscates the change in what a fiscal year signifies. In this way, listing values by year makes 1996 and 2016 comparable despite changes to their actual timeframes, thereby normalizing the data through schematization.
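
One way to keep that change visible is to annotate each fiscal year with the calendar span it covers, as in the sketch below; the handling of the 2007 transition year is an assumption and would itself deserve a metadata note.

```python
from datetime import date

def fiscal_year_span(fy):
    """Approximate calendar span of a City of Atlanta fiscal year."""
    if fy <= 2006:
        # Calendar fiscal years, January through December.
        return date(fy, 1, 1), date(fy, 12, 31)
    # July-to-June fiscal years, labeled here by their ending year. How the
    # 2007 transition itself is treated is an assumption to document.
    return date(fy - 1, 7, 1), date(fy, 6, 30)

print(fiscal_year_span(1996))   # (1996-01-01, 1996-12-31)
print(fiscal_year_span(2016))   # (2015-07-01, 2016-06-30)
```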

These different instances illustrate that schematization and normalization are an ongoing and iterative process. As data are added to a dataset, the organizational architecture, entry-level schemata, and processes of standardizing are tested. These new data reveal where the structures and standards are consistent, but also where modifications need to be made. Rather than indicating that the initial schemes and norms are incorrect, these iterative adjustments reveal that any schemes and norms depend on the scope and scale of the data. In order to make data machine-processable, one must adjust these structural features and standards to adhere to the specific demands of the machine process. Yet, adjustments require adequate documentation to reveal potentially obfuscated assumptions.

Insight 5: Documentation is not just about data but about process

Where data themselves can provide meaningful insights into phenomena, these insights depend on the quality of the data. Data quality stems from the granularity, collection method, frequency, and timeliness of the data relative to a particular research question. Some of these features can be directly assessed from the dataset (e.g. granularity), but others—such as collection method, and when and by whom the data were produced—are only knowable through metadata. With a data corpus spanning many legacy systems, documentation standards often vary, leading to issues with verifying data quality.

Most often metadata provide valuable information about who, where, when, how, and occasionally why data are produced. According to Kitchin (2014), metadata fall into three categories: descriptive metadata, or data about what the dataset is called, what it includes, and who produced it; structural metadata, or data about the organization and scope of the dataset; and administrative metadata, or data about how the dataset was produced, including file format, ownership, and license. In our project we created metadata that describe the structure, collection methods, and contents of the dataset. In these files, we explain the ways the data changed over time (e.g. the fiscal year shift), the norms and schemes for ordering the data (e.g. how unique identifiers work and the descriptions of the tiered structure), and where the data came from (e.g. what files are sources and where those files came from). Even with all of these metadata, some aspects of the data production were still missing.
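
A sketch of what such a dataset-level metadata record might contain, organized by Kitchin’s three categories, is below; the key names and values are illustrative, not a formal standard.

```python
# Dataset-level metadata following Kitchin's (2014) three categories.
dataset_metadata = {
    "descriptive": {
        "title": "City of Atlanta adopted budget data, 1996-2017",
        "contents": "Budgeted and audited revenues and expenses by fund",
    },
    "structural": {
        "organization": "fund group > fund > subfund, plus unique IDs",
        "columns": "one column per fiscal year",
        "note": "fiscal year changed from Jan-Dec to Jul-Jun in 2007",
    },
    "administrative": {
        "formats": ["XLSX", "CSV", "TSV"],
        "sources": "adopted budget PDFs, City of Atlanta Dept. of Finance",
    },
}
```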

Extracting and synthesizing required us to account for the particular page(s) where a datum was found. These annotations allowed us to document where we found errors or typos. Additionally, tracking where a datum came from offered a means for us to mitigate the introduction of our own errors through a transparent process. These cell-level (or entry-level) annotations constitute a fourth category of metadata—transformational metadata. These metadata track the actions taken to create a particular value or file. At this very granular level, metadata aid data quality by revealing the original source material (page numbers of where a value comes from), including errors, typos, and annotations about different synthesized values. These metadata offer insight into how the dataset was produced; that is, transformational metadata are data about wrangling itself.

In the case of legacy data, transformational metadata are vital. With a host of potential variations, artifacts, and even errors from different legacy systems, documenting how legacy data were wrangled allows data analysts and researchers to inspect data production. By doing so, the process becomes data that can be analyzed and verified.

Conclusion

Our project sought to answer the question: what assumptions, barriers, and challenges exist within the sociotechnical practice of wrangling legacy data in the smart city? These five insights provide a series of conclusions about the pressing challenges for smart cities.

  1. Questions Drive Data: Although many claim open data hold nearly limitless insights, the project highlights that these insights are only as good as the questions being asked of data. Without a clear understanding of how data are useful for a given end, efforts to open data are more than likely to be aimless, reinforcing foregone conclusions rather than producing new insights. Even more, collecting data because they exist undermines the relationship between data and conclusions by confusing exploratory research with descriptive research. As such, being explicit about the questions driving the production of machine-processable OGD attunes conclusions and fosters different questions, thereby motivating data release.

  2. Internal Capacity/Knowledge Cannot Be Overlooked: With increased emphasis on public-private partnerships, city-university partnerships, and even subcontracting, smart cities projects are often accomplished by extending local government capacity through a contractual relationship. As such, local governments may be missing out on driving the agenda of smart cities by not developing internal capacity. Where companies can move fast, governments move slowly, and slowness can be an asset when it comes to institutional memory related to the particular governmental context. For smart cities to thrive, change—technological or otherwise—needs to be paired with a long-term strategy. Local governments can be the bearers of that strategy, and can only do so effectively by building the internal capacity and knowledge to establish appropriate resources (or leverage existing ones), develop new programs and projects, and negotiate contracts that align with internal best practices.

  3. Prepare for Change with Interoperability: The value of smart cities is derived from the comprehensive and integrated array of technologies and processes. With so much flux, taking a long-view on change is important. This long-view prepares for change by assuming no project, system, or dataset exists in isolation. Where companies sometimes (maybe often) push proprietary systems, local governments can push back and think about data ownership and governance in the terms of a different timescale. Here again, internal capacity and knowledge are vital. OGD are only a portion of the data landscape of a smart city. Open data, more generally, decouples data from systems. Although not always possible, establishing open data standards for all systems makes migration from one system to another easier by establishing an expectation of interoperability. Additionally, establishing open data standards also establishes administrative practices for wrangling data by creating expectations that data and systems require different attention and skills.

  4. The Smart City is the Documented City: If data are the fuel of the smart city, then metadata are the pipeline and logistics network. To foster insights from data, local governments need to set standards not just for the release of data but for its adequate documentation. Documentation allows smart cities to learn from their past rather than just react to the present.

Smart Japan: Observations from an interdisciplinary urban design studio

by Emma French

Shibuya Crossing in Central Tokyo, reportedly the busiest pedestrian crossing in the world

In March, I had the incredible opportunity to travel to Japan for a week with 18 students for Georgia Tech’s Smart City Urban Design Studio led by Professor Perry Yang. The purpose of the studio is to explore how smart city technologies and tools such as 3D GIS, urban energy modeling, eco district certifications such as LEED ND, IoT (Internet of Things), pervasive computing, and big data can be incorporated in design processes to support the shaping of ecologically responsive, resilient, and human sensing urban environments. Comprised of Georgia Tech graduate students in city planning, architecture, policy, industrial design, and interactive computing, the studio represents a collaboration between Georgia Tech, the Global Carbon Project (GCP), the National Institute for Environmental Studies (NIES), and the Department of Urban Engineering of the University of Tokyo.

At the beginning of the studio, we divided ourselves into four groups based on our interests and areas of expertise: Conceptual Design (mostly made up of architects), Performance Modeling (mostly planners), Smart City Computing (a mix of industrial designers, interactive programmers, and planners), and Community Engagement (planners and policy students).

Our task: to design a framework for the smart development of a satellite city called Urawa Misono in Saitama Prefecture, Japan. About 45 minutes from Tokyo by train, Urawa Misono is the last stop on the Saitama Rapid Railway Line. Every two weeks, thousands of Urawa Reds soccer fans swarm the station and walk or drive to the massive Saitama Stadium, which was constructed to host matches in the 2002 FIFA World Cup.

Georgia Tech students walking from the train station to Saitama Stadium on their site visit in Urawa Misono

Saitama Stadium will be an important site for the 2020 Olympics, prompting local and regional officials to think about how they are going to accommodate the massive influx of people coming in for the games. Even without the Olympics, Urawa Misono’s population is expected to grow from a little over 7,000 today to over 32,000 by 2030. To top it all off, the national government has identified Urawa Misono as a potential site of smart development, leading to increased investment in the area by smart city leaders such as Toyota and IBM.

Japan is already considered one of the smartest countries in the world, with its tech savvy population and concentration of tech conglomerates. Japan’s national Smart ICT Strategy published in 2014 by the Ministry of Internal Affairs and Communications laid out the country’s goal of becoming a global leader in ICT innovation by 2020.

We experienced many of Japan’s smart technologies in our first hours on the ground. From the public toilets that have heated seats and play music to ensure privacy, to the heated floors in our residence, the most obvious innovations seemed closely tied to individual human comfort. Other innovations, such as the rapid transit systems and compact residential developments focused more on efficiency and convenience than individual comfort.

Top: Smart toilets in Japan allow you to adjust the temperature of the seat, play music, and flush by simply waving your hand in front of a sensor. Bottom: Japan’s extensive urban rail network transports 40 million people daily. Biking is so prevalent on the University of Tokyo’s campus that individuals are required to register for a parking spot at $20/month.

Due to the purpose of our visit, I found myself noticing things that I probably would have overlooked on any other trip. Things like the reflectors set up along the highway that eliminated the need for energy-intensive overhead street lights. Or the six different types of trash and recycling receptacles lined up in Ueno Park. Perhaps the most intriguing innovation was a road in a rural area we visited that played a song as you drove over it. The purpose of the musical road was to announce to visitors that they were entering a region known for its fruit production.

Our studio forced me to think not just about the initial purpose of these smart innovations, but also about their ongoing performance. Leading up to our trip, one of the biggest challenges for us as a studio had been effectively integrating the work being done by each of the subgroups into one coherent proposal. During an initial charrette we came up with our own parameters for a smart city: one that is sustainable, adaptable, and equitable. Designing a framework for the development of such a place—in a country we knew very little about—proved exceedingly difficult. As the conceptual design team drew initial plans and the performance modeling group devised measurements with which to evaluate those plans, the smart city computing team grappled with the challenge of creating adaptable public spaces and structures, and the community engagement team attempted to use technology to communicate with residents of Urawa Misono to ensure that our studio’s proposals were grounded in local customs and needs.

The challenges faced by our studio—making our design proposals sustainable, adaptable, and locally relevant—are some of the fundamental challenges facing smart city initiatives around the world. While smart infrastructure has the potential to improve urban functionality, in order to create truly smart cities we need to be continuously evaluating them based on more than just technology deployment. A comprehensive, ongoing evaluation system, perhaps something along the lines of Bloomberg’s newly released National Standard for Data-Driven Government, is needed to ensure that smart city initiatives are not solely about technology, but also about achieving long-term efficiency, addressing local needs, and promoting equity.

To learn more about the Misono Smart City Studio, check out our blog: https://waterfrontcities.wordpress.com/

Making Smart Communities: Streamlining Research, Development, and Deployment

by Jennifer Clark, Center for Urban Innovation

On March 16, 2017, I was invited by the US House Energy and Commerce Committee’s Subcommittee on Digital Commerce and Consumer Protection to provide expert testimony about the importance of smart communities to commerce and infrastructure systems. The Committee held the hearing “to examine the ways that communities across the country are tapping into new technology and collaborating with private sector companies to deliver new initiatives that will improve safety, increase efficiency and create opportunity.”

My oral testimony at the hearing may be viewed on the Committee’s website and is part of the Committee’s “Disrupter Series” on emerging technologies. My full written witness statement is included in this blog post and also available on the Committee’s website.

Written Testimony on Smart Communities

Formal Citation: United States. Cong. House. Committee on Energy and Commerce. Subcommittee on Digital Commerce and Consumer Protection. Hearing on Disruptor Series: Smart Communities. March 16, 2017. 115th Cong. Washington: GPO 2017 (statement of Dr. Jennifer Clark, Associate Professor, Georgia Institute of Technology)

Summary

Smart communities are critical to the future economic competitiveness of the United States. Smart communities are not just an opportunity to increase economic growth but they present a challenge as well: Does the U.S. invest in intelligent infrastructure to build the 21st century economy and plan for what’s beyond?

The Federal Government has an important role to play in shaping the scope and scale of intelligent infrastructure investments going forward. In short, the Federal Government will decide the platform on which the national economy is built going forward and whether it meets 20th century standards or sets a new standard for the 21st century economy. Research universities have extensive experience partnering with industry and government on technology diffusion projects like smart communities. Research universities are built to test new technologies, evaluate alternatives, assess investments, evaluate economic impacts, measure distributional consequences, and certify processes, materials, products, and standards. As with any new enabling technology, research universities can play a role as a neutral third party with specialized technical expertise. Further, universities are embedded in local communities and have long-term working relationships with local and state governments and a vested interest in the presence of world class infrastructure in their own communities.

How to design and deploy intelligent infrastructure to efficiently and effectively support smart communities is one of the central questions going forward for the country as a whole and for local communities in specific. Building the replicable models and dissemination networks for the broad and sustained implementation of information and communication technologies into the next generation of national infrastructure is the opportunity and the challenge before us.

Introduction

“Smart communities” have captured the attention of popular audiences and experts alike. The “Smart City” concept promises access and opportunity as well as expanded services and increased efficiencies for local communities. The idea promises simultaneously to generate new revenue via new markets, products, and services and to save money through new efficiencies and systems optimization. Advocates argue that smart communities are more efficient, more sustainable, more profitable, and more inclusive.

Economic geographers have long studied innovation as part of the broader disciplinary project of mapping and analyzing the spatial distribution of economic activities within and across cities, regions, and countries. In recent years, technology and innovation have gained privileged positions of prominence in these industry analyses. Researchers have focused particularly on processes of technology diffusion and how regional economic ecosystems absorb new technologies and incorporate them into existing complexes of firms, industries, and industrial specializations—in other words, how incumbent systems incorporate new processes, products, materials, and actors.

Smart communities are a challenge and an opportunity for the U.S. The challenge is to proactively engage the declining, incumbent national infrastructure system and not merely repair it but replace it with an internationally competitive cyber-physical system that provides not only an opportunity for better services for citizens but a platform for a 21st century, high-tech economy and beyond.

Smart Communities and US Economic Competitiveness

Smart communities are critical to the future economic competitiveness of the U.S. Over 90 percent of the country’s GDP is generated in metropolitan economies — in cities and their suburbs. Smart communities are not just an opportunity to increase economic growth and opportunity; they present a challenge as well: Does the U.S. invest in intelligent infrastructure to build the 21st century economy and plan for what’s beyond? Or does the U.S. miss the moment when targeted investment in integrating information and communications technologies (ICT) into infrastructure systems could form the foundation of an “Industry 4.0” level cyber-physical system? The state of U.S. infrastructure and the amount of funding devoted to it undermine U.S. global leadership in smart communities innovation and implementation. The American Society of Civil Engineers’ latest report card ranked America’s infrastructure at a D+, requiring $3.6 trillion in investment. The question is how the U.S. can plan a smart communities future, and the research and development necessary to support it, when there is such a critical gap in incumbent infrastructure systems.

The economic opportunity presented by smart communities is three-fold. First, the data produced by intelligent infrastructure promises to increase the reliability of local government services and performance of infrastructure systems. The data paves the way for building interoperable and cross platform systems that build efficiencies and ultimately allow localities to provide higher quality services at a lower cost. The result is the opportunity to expand services and maintain more reliable and efficient systems ranging from waste management to transportation.

The second opportunity is that smart communities data systems can enhance and inform the strategic planning capacities of local communities — large and small — with real world (continuous and real time) data on how infrastructure and infrastructure systems are used by citizens and businesses and how the infrastructure is performing. Local communities, businesses, and citizens will be able to see how their community is operating rather than model its functions based on past performance.

Further, the sharing of data amongst smart communities partners and participants helps to build networks for diffusing policy strategies and technology models. These strategic partnerships form the foundation for the third economic opportunity that flows from smart communities: entrepreneurship and market leadership. The data generated by and for smart communities systems (and the systems that produce that data) form the foundation of new enterprises and new products and services and, as a consequence, function as platforms for further economic development.

“Intelligent Infrastructure”: Next Generation Services and Structures

The promise of “smart” or “intelligent” infrastructure is that it will increase resilience across domains of critical infrastructure systems by expanding capacities and building resiliency through increased interoperability. In other words, by moving from a collection of discrete infrastructure systems to truly interdependent infrastructure ecosystems, the efficient, effective, predictable, and adaptive delivery of services will increase as well.

Across disciplines ranging from engineering to computer science to innovation policy, intelligent infrastructures are increasingly seen as solutions to the “wicked” problems that face local governments. These problems include how to respond to both long term and short term threats to resilience: 1) strained resources spread across ever growing urban populations, 2) aging infrastructures and public services systems, 3) competitiveness in the global economy, and 4) acute human and environmental stressors.

In recent years, governments ranging from dense urban environments to rural communities have made significant investments in smart and connected communities (SCCs), leveraging the capacity of information and communication technologies (ICTs) to improve existing operations and develop new services. The resulting “intelligent infrastructure” is dependent on a layer of new technologies to collect and store data, combine data from both fixed and mobile sensing devices, integrate existing data sets, and report the status of the city to user groups including businesses, governments, and communities. These new data streams come from connected, self-reporting, sensing devices (e.g. the Internet of Things, or IoT), citizen contributions (e.g. crowdsourcing), and municipal and official sources (e.g. open government data). These new capacities contribute to an increasingly complex system of users, platforms, interests, and information—with profound implications for systems design and governance.

This infrastructure presents particular challenges because it is integrated both into and across different critical infrastructures. From water and electricity systems and across built, natural, and socio-economic environments, robust intelligent infrastructure is increasingly required for the secure and resilient operations of government services and systems. As a consequence, this infrastructure-of-infrastructures presents a unique problem for critical infrastructure: how to integrate the capabilities and capacities of intelligent infrastructure into incumbent systems while mitigating interruptions, reducing exposure to threats, and ensuring continuity of service? In short, intelligent infrastructure requires attention in its own right as a new critical public infrastructure.

Intelligent infrastructure is quickly becoming central to the operations of critical infrastructure providing services ranging from water, to energy, to multi-modal transportation, to health, to communications. And economic competitiveness is increasingly tied to the reliability and resilience of these critical infrastructures. Simply put, places without robust intelligent infrastructure systems will be left behind in the global economy because their critical infrastructure systems — utilities, energy, transportation, health, and emergency services — will not be competitive compared to places that made the investments in cyber-physical systems to support operations.

Intelligent infrastructure directly impacts the management of systems through manual and semi- and fully-autonomous interventions, such as allowing changes to traffic lights during a period of heavy vehicle throughput. Intelligent infrastructure also indirectly impacts existing systems by providing information important to design, maintenance, and decision-making from operations to city planning and administration.

The products currently emerging in the context of smart communities are largely service-embedded goods built on a platform of critical infrastructure systems. In other words, smart communities cannot move forward without intelligent infrastructure. Smart communities require: 1) connectivity (reliable, predictable, interoperable, and upgradeable), 2) analytical services (expertise and assets to make data legible and useable), 3) data storage and management services (including security and privacy), and 4) open access to data through platforms and interfaces for citizens, entrepreneurs, and incumbent firms to build enterprises and expand engagement.

For example, a “smart cities object” — a trash can, a streetcar, a light pole, a traffic light — requires embedded sensors. Those sensors require connectivity (fiber, wireless, etc.). The object requires a service contract to maintain and manage that connectivity. Data analytics are required to manage the resulting data and perform analysis. Interfaces and visualization tools are required to make the data accessible to citizens and businesses. Smart communities are a market-making enterprise, and failing to invest in intelligent infrastructure not only misses the opportunity to provide local communities with globally competitive roads, bridges, and transit but also abdicates the opportunity to build a new industry around the products, services, and systems developed on the platform of intelligent infrastructure.

Making Smart Communities: Streamlining Research, Development, and Deployment

The making of smart communities follows a model of technology diffusion familiar in the private sector context. This, however, is technology diffusion into a public-sector context where there is a necessary focus on the broad provisioning of reliable and efficient services and a consideration for building access to data for enterprise development. There are significant private sector participants in smart communities and some of these firms have created consortiums to offer communities integrated and interoperable packages of hardware, software, and connectivity services.

In the U.S., the national innovation system largely relies on publicly-funded basic research and development conducted within the network of world class research universities throughout the country. For decades, these universities have served as the research and development backbone of U.S. industry and of national defense. Research indicates that this national innovation system has been effective in bringing forward new technologies and in facilitating the commercialization of new products, processes, and materials.

In the smart communities context, research universities are again serving an essential role in the research and development phase of smart communities innovation. At Georgia Tech, we are engaged in developing new policy models for smart communities as well as new technologies including data analytics, sensor networks, and operating systems. Through this research we have identified four key elements in smart communities technology projects: 1) Phased technical deployment to increase opportunities for in-action learning, community engagement and responsiveness, and integration of ongoing technical improvements, while simultaneously reducing the implementation burden on participating organizations, 2) Comprehensive administrative and technical strategies focused on interoperability that account for the necessary current and future need for systems to communicate and foster expansion over time, 3) Programmatic commitments to engaging the community at large, and to integrating concerns originating in everything from planning to technical specifications in meaningful ways and tailored to local conditions, 4) Established policies around open data and open innovation in order to ensure both continued access and local and regional economic development.

Local governments are focused on managing growth and change in their communities and providing services to citizens. Rarely do local governments have internal research specializations. Although some larger local governments have made recent investments in innovation delivery teams, information management teams, and resilience offices, these efforts remain focused on enhanced service delivery to citizens. Further, many of these efforts have been financed by philanthropic investments by leading national foundations interested in improving the quality of life and capacity for service delivery in local communities. In other words, even the exemplar smart communities programs are largely experiments with limited resources, limited timelines, and unclear scalability.

Research universities have extensive experience partnering with industry and government on technology diffusion projects. Research universities are built to test new technologies, evaluate alternatives, assess investments, evaluate economic impacts, measure distributional consequences, and certify processes, materials, products, and standards. As with any new enabling technology (biotechnology, nanotechnology, advanced manufacturing, photonics), research universities can play a role as a neutral third party with specialized technical expertise. Universities are also embedded in local communities and often have long-term working relationships with local and state governments. Research universities also have a vested interest in the upgrading and maintenance of intelligent infrastructure in the cities and communities in which they are located. World-class industry partners, star scientists, and the next generation of entrepreneurs all look for intelligent infrastructure to support their research and commercial enterprises. The absence of this infrastructure makes universities less globally competitive — for talent and for capital. And, as stated before, such absences make local communities less globally competitive as well.

Rather than stand up research and development divisions in every local government in the country in order to assess and deploy smart communities technologies, it would be reasonable to again turn to the nation’s network of world class universities, like Georgia Tech, to conduct the research and development work of smart communities and thus facilitate the path to deployment by local communities.

Finally, as research universities train the next generation of workers, citizens, and entrepreneurs, it is important to recognize that living and working in smart communities will be distinct from living and working in the built environment as it exists now. Whether the changes are immediately disruptive, like autonomous vehicles, or incremental adjustments to the skills required for living in and navigating the built environment (think automated grocery store checkouts or smartphone-based parking systems), investments in technical training for new and incumbent workers will be required to take advantage of the value these technologies add to the labor market. Universities again will be critical partners in developing both these technologies and the skilled workforce required to capitalize on their contributions to national and regional growth.

Smart Communities Implementation and the Role of the Federal Government

In 2015 the U.S. Department of Transportation announced a Smart Cities Challenge for cities across the country. The competition was a “winner take all” grant, which Columbus, Ohio, won. But 77 other communities also applied for the grant. In other words, 77 local communities across the country pulled together strategic plans for implementing intelligent infrastructure systems in their own communities and tailored to their own needs. The Federal Government has long played an essential role in investing in infrastructure and in emerging technologies. Smart communities combine both of these roles. And communities across the country have demonstrated their readiness to move forward.

The Federal Government has several key roles going forward. First, as noted above, smart communities involve technology diffusion into a complex private sector and public sector space — and that space is also a place, a jurisdiction. The implementation of smart communities involves engaging real people in real places in real time. Therefore, flexibility and policy tailoring will be essential to successful implementation. What works in New York City is unlikely to be exactly what works in Columbus or Savannah or Dallas. One size will not fit all.

Although the Federal Government should not set a standardized approach, it should consider developing technical standards and platforms for data, connectivity, and integration of hard infrastructure and information and communication technologies to protect citizens and consumers from excessive experimentation. The National Highway Traffic Safety Administration’s approach to guidance on autonomous vehicles is a good example of signaling to industry, local governments, and researchers about how to shape strategic planning and private investment while protecting consumers and citizens. The National Institute of Standards and Technology’s efforts to develop the Global City Teams Challenge and convene industry, local governments, and universities to discuss and develop standards is an important start as well.

Because smart communities technologies cut across domains they also do not fit neatly under a specific federal agency. Many of the efforts to consider and support smart communities have been partial and ad hoc. The recent call for public comments by the Networking and Information Technology Research and Development (NITRD) Program on the “Smart Cities and Communities Federal Strategic Plan: Exploring Innovation Together” is a start at coordinating planning across the Federal Government.

Georgia Tech and the City of Atlanta are partners in a national network designed for developing smart communities policies and technologies with the scalability of those models to other local governments in mind. The MetroLab Network is a network of 38 cities, 4 counties, and 51 universities, organized into “city (or county) – university partnerships” focused on “research, development, and deployment” (RD&D) projects that offer technologically- and analytically-based solutions for challenges facing communities: mobility, security and opportunity, aging infrastructure, and economic development. One role for the Federal Government is in resourcing and institutionalizing these networked partnerships to support policy diffusion across communities and information exchange about how smart communities investments (programs, projects, and objects) perform as implemented. These networks allow local governments to achieve some economies of scale, build capacity, and avoid replicating mistakes or reinventing the wheel.

The Federal Government has an important role to play in shaping the scope and scale of intelligent infrastructure investments going forward. Simply put, the Federal Government will decide the platform on which the national economy is built going forward and whether it meets 20th century standards or sets the standard for the 21st century. There is a significant amount of basic research required to ascertain how to achieve the promise of smart communities. Some of that research can be resourced through programs like the Smart and Connected Communities program or the Critical Resilient Interdependent Infrastructure Systems and Processes (CRISP) program of the National Science Foundation. However, the current resources are modest investments in basic research and not of a sufficient scale to support the broad, national technology deployments necessary.

There is also a significant amount of applied research required to move smart communities technologies from design to development to deployment. There is a growing need for education and training. In research universities like Georgia Tech we are developing new curricula that integrate teaching and learning about innovation and communities, technology, and cities and regions. We are also investing in research centers, like the Center for Urban Innovation and the Institute for People and Technology, that take an interdisciplinary approach to moving innovations in engineering, sciences, and computing into the complex real-world context of communities, entrepreneurs, and industries. How to design and deploy intelligent infrastructure to efficiently and effectively support smart communities is one of the central questions going forward for the country as a whole and for local communities in particular. Building the replicable models and dissemination networks for the broad and sustained implementation of information and communication technologies into the next generation of national infrastructure is the opportunity and the challenge before us.

Teaching Smart Cities: From Urban Policy to Urban Innovation

by Jennifer Clark, Center for Urban Innovation

A Sample Smart City from IDC Government Insights (2013), courtesy Smart Cities Council

The topic of smart cities — as a discourse and as a practice — first came on the popular scene with initiatives such as IBM’s Smarter Cities in the early 2010s and has since captured a much wider audience. Like many technology projects, smart cities have caught the public imagination as something novel. Self-driving cars are presented as “disrupting” transportation models and the built environment itself. And yet, self-driving cars are still just individual cars. They drive on the same streets that have defined the urban form for more than a century. They may influence the demand for parking, but it is less clear what effect they will have on roads. If anything, such technology appears to be incremental, not disruptive. And, when policy expertise enters the conversation, we see clear evidence of this incrementalism.

The growing interest in smart cities has presented some interesting questions to the academic community: Where does one learn about smart cities? Who teaches smart cities? What discipline or degree programs prepare students to design, implement, and evaluate smart cities?

“Smart cities” is rarely seen for what it is — a technology diffusion challenge operating in a dynamic and contested space between the public and the private sector.  The technology development will likely prove to be the easy part; it is the design and deployment of these models into this liminal space where governance, regulation, access, participation, and representation are all unclear and the “operating standards” are yet to be fully articulated that will prove to be the real challenge.

Smart cities present a very interesting challenge to teaching and to curriculum development in universities. This is a technology-intensive field which is fundamentally interdisciplinary and necessarily rooted in the social sciences. What makes cities is people — the choices they make, the places they go, the things they buy, and where they live and work. The built environment shapes those choices, and urban systems facilitate or aggravate both movement across cities and living in them. But at their core, cities are complex political, economic, and social systems. So, the challenge of smart cities is not one of technology alone. Indeed, most of the relevant technologies exist and currently operate in other contexts like manufacturing and defense. The question then becomes — beyond a grasp of the underlying technologies — what does one need to know to be a smart cities expert?

What are the prerequisites for studying smart cities? Does it require a background in data analytics? Civic computing? Civil engineering? Or, does the mastery of smart cities require knowledge of cities themselves? Stated another way, could you effectively study biotechnology without mastering organic chemistry or biology? Could you study astrophysics without an understanding of physics and mathematics?  

I began teaching university-level courses about how to study cities in 2004 at Cornell University. The first course I taught was an introduction to urban fieldwork tailored to undergraduate urban studies students. The course was intended to prepare students for careers that required understanding the actors and processes that shape the urban environment.  

Since then, I have taught many other courses on urban policy and urban and regional economic development at Georgia Tech. I have also coordinated a graduate concentration of the MSPP degree in public policy specializing in urban policy and anchored by a two-semester course sequence: PUBP 6604: Urban Policy Analysis and Practice and PUBP 6606: Urban Development Policy. And, in my experience, every year these courses change at the margins if not in their core content. These courses change because cities themselves are dynamic — what cities do and why and how changes over time, and thus, so does the study of them. After teaching these courses for more than a decade, I see them now through the lens of the evolution of the field itself from urban policy to urban innovation.

The evolving nature of both the discipline and the practice has been highlighted to me through my evolving use of the two core books I have taught for several years in Urban Policy Analysis and Practice: 1) Basic Methods of Policy Analysis and Planning (a book I co-authored with colleagues in policy and planning disciplines), and 2) Fast Policy (a book co-authored by colleagues from urban and economic geography). Both books emphasize the speed at which policy analysis and policy diffusion occur and the role of institutions and analysts in speeding along policy change — and their corresponding responsibilities in slowing it down — to be more deliberate, assess alternatives, and make informed determinations about what works and what doesn’t and for whom. In other words, both books underscore the need for urban innovation experts to understand efficiency, equity, distribution, and impact in addition to technology. Fundamentally, smart cities are about being smart, not just being high-tech.

In February 2016, the President’s Council of Advisors on Science and Technology (PCAST) released a major report “Technology and the Future of Cities.” The report outlined a strategy to guide federal investment and engagement in smart cities initiatives. Although the future of these initiatives and the impact of the original PCAST report in influencing investment is uncertain, the report itself revealed some interesting absences. Only a small number of the more than 100 contributors to the Future of Cities Report represented the perspective or expertise of the social sciences focused on cities and the urban scale: urban policy, urban planning, urban geography, urban history, urban economics, or urban administration.  

Historically, the social science fields focused on cities have been sub-fields of much larger disciplines — economics, political science, geography, history. After decades of deindustrialization and disinvestment in cities, these sub-fields are not always the most popular or publicized. However, urban planning — to varying degrees — is the exception to the sub-field rule. Within urban planning, the consensus opinion has long been that urban planning is a discipline of its own. Its disciplinary boundaries run parallel to architecture in that there is a core curriculum, a professional master’s degree, professional certifications, and a clear professional practice. One is trained as an urban planner to work in urban planning. In other words, urban planning has rarely identified as an interdisciplinary project.

As a consequence, “smart cities” as a domain has emerged into a world of degrees and disciplines in which its home is likely to be fluid rather than fixed. Teaching smart cities will likely be a collaborative and interdisciplinary project, with its core knowledge claims rooted in an understanding of the workings of cities and its novel value claims oriented around its interdisciplinarity and its integration of knowledge about not just technology but how technology can be used in the urban context.

For me and the curriculum I teach, the promise of urban innovation is exciting. I look forward to teaching urban policy as the landscape changes and smart cities becomes a centerpiece of investment and administration. Cities have never stood still. There is no reason why the curriculum about them should either.

People-Centered Planning in Smart Cities

By Emma French

The term “smart city” has become common parlance in city planning circles in recent years. While there is no universally agreed upon definition, descriptions of smart cities typically refer to integrated and interoperable networks of digital infrastructure and information and communication technologies (ICT) that collect and share data and improve the quality of urban life (Allwinkle and Cruickshank 2011; Batty et al. 2012). However, unlike related concepts such as the digital city, the intelligent city, and the ubiquitous city, the smart city is not limited to the diffusion of ICT, but also commonly includes people (Albino, Berardi, and Dangelico 2015).

Many of the technological enhancements propelling the smart city revolution require re-designing and in some cases re-building the underlying infrastructure that holds cities together. City planners will therefore play a significant role in the creation and implementation of many smart city initiatives. In a 2015 report on smart cities and sustainability, the American Planning Association (APA) posited that new technologies will aid planners by creating more avenues for community participation in policy and planning processes (APA 2015).


Public Participation in Planning

Widely-held conceptions of planning have shifted over the last century from normative, rational models that position planners as technical experts, toward a theoretical pluralism characterized by the political nature of planning, competing interests of stakeholders, and decisions as negotiated outcomes facilitated by planners (Lane 2005). These more contemporary models, most of which were first conceptualized in the 1960s and 1970s, view citizen participation as a key part of the planning process. Smith (1973) argues that participatory planning increases the effectiveness and adaptability of the planning process and that citizen participation strengthens our understanding of the role of communities in the urban system.

Meaningful public participation in planning has been found to better planners’ understanding of the community context (Myers 2010), improve decisions through knowledge sharing (Creighton 2005), increase trust in political decision making (Richards, Blackstock, and Carter 2004; Faga 2010), produce long-term support of plans (Levy 2011), enhance citizenship (Day 1997; Smith 1973), build social capital (Layzer 2008), and address complex problems through collaboration and consensus (Innes 2010; Godschalk 2010).

While these more contemporary planning models emphasize the importance of citizen engagement, achieving meaningful participation has proved difficult. Challenges preventing meaningful citizen participation include funding and resource constraints (Creighton 2005), literacy and numeracy (Community Places 2014), disinterest (Cropley and Phibbs 2013), lack of access to necessary resources (Cropley and Phibbs 2013), the prescriptive role of government (Njoh 2002), power inequalities within groups (Reed 2008), jurisdictional misalignment (Layzer 2008), and lack of respect for public opinion (Day 1997).

In her seminal 1969 article, “A Ladder of Citizen Participation,” Arnstein uses examples from federal urban renewal and anti-poverty programs to illustrate different manifestations of participation in practice (see figure to the right). Arnstein defines citizen participation as “the redistribution of power that enables the have-not citizens, presently excluded from the political and economic processes, to be deliberately included in the future” (Arnstein 1969, 216). Arnstein’s examples show how some efforts to include citizens in planning and decision making can perpetuate existing systems of power and actually further disenfranchise marginalized communities.

Glass (1979) attributes the dearth of meaningful citizen participation in planning and policy making processes to lack of attention to the design of participatory programs and a mismatch between objectives and techniques. Glass concludes that if the goal is just to get citizens to participate then almost any technique will be seen as sufficient. He argues that one technique alone is never enough and that meaningful citizen participation requires a continuous, multifaceted system of engagement (Glass 1979).

Technology-aided Participation

For decades scholars have been exploring ways that technology can enable meaningful participation in planning and policy making. Recent hype around “smart cities” has fueled the debate about the role of technology in these processes. Technology has been found to support citizen participation in planning by increasing participants’ understanding of issues and proposed plans (Salter et al. 2009), supporting collaboration (Jankowski 2009), strengthening the role of low-income residents (Livengood and Kunte 2012), and enabling alternative, informal manifestations of civic engagement (Asad and Le Dantec 2015).

Simply adding technology to the planning equation, however, does not always guarantee meaningful participation (Sylvester and McGlynn 2010; Epstein, Newhart, and Vernon 2014; Holgersson and Karlsson 2014). While the use of technology may address some barriers to participation in planning processes, it may actually exacerbate other barriers that stem from structural social, economic and environmental inequities.

Equity, Planning and Smart Cities

Despite the emphasis on meaningful citizen participation in planning, low-income, urban communities of color often still suffer from poor infrastructure, environmental degradation and exposure to toxins, and potential displacement due to rapid gentrification. A concern voiced by many critics of smart cities is that, like previous attempts to use technology to engage the public, the existing digital divide will likely limit use of smart city technologies to certain groups of people with certain resources and skills.

Using 2007 Pew survey data, Sylvester and McGlynn (2010) estimated four logistic regression models to explain the factors leading to individuals having “low access” to the Internet and how internet usage and physical location influence civic participation. They find that living in a rural area and being African American or Hispanic increase the probability of having low access to the Internet. Age was found to have a significant, negative effect on Internet access—meaning that the younger you are, the more likely you are to have access to the Internet. The results also showed that people living in urban areas were more likely to contact the government by phone (Sylvester and McGlynn 2010).
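For readers unfamiliar with the method, the sketch below shows the general shape of such a model using Python's statsmodels. Everything in it is illustrative: the variable names are our own, the data are randomly generated placeholders, and it reproduces neither Sylvester and McGlynn's specification nor their results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Placeholder data only -- NOT the 2007 Pew survey.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "low_access": rng.integers(0, 2, n),   # 1 = low access to the Internet
    "age": rng.integers(18, 90, n),
    "rural": rng.integers(0, 2, n),        # 1 = lives in a rural area
    "race": rng.choice(["white", "black", "hispanic", "other"], n),
})

# A logistic regression of the general kind described above: the log-odds
# of low Internet access modeled as a function of demographics and location.
model = smf.logit("low_access ~ age + rural + C(race)", data=df).fit(disp=False)
print(model.params)  # fitted coefficients (meaningless here; the data are random)
```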

The recent hype around smart cities is fueled to some degree by the rapid migration of people into cities. In 2014, fifty-four percent of the world’s population lived in urban areas, and the World Health Organization estimates that by 2030 that number will be closer to eighty percent (WHO 2017). Atlanta is expected to grow by about 2.5 million people in the next 25 years; however, income inequality in the city is increasing and poor urban residents are being displaced by millennials and baby boomers (Coleon 2016).

This brings up a major concern regarding smart cities. Namely, who are we making cities smart for? If our efforts to make cities more efficient, safe, and clean result in the displacement of marginalized communities, are these cities really smarter than the ones we live in now? No sensor can substitute for public engagement and responsive leadership. Agyeman and McLaren (2016) advise against the creation of tech hubs without a simultaneous strategy to protect and invest in affordable housing, basic services, and infrastructure.

Adam Greenfield presents a similar, albeit more in-depth, critique in Against The Smart City, where he investigates three major international smart city urban developments and argues that the marketing materials and promises of the sponsors highlight their interest in this top-down, data-rich urban management system (Griffiths 2013).

The Role of Planners in the Smart City

In the APA’s Smart City and Sustainability Task Force survey, planners ranked socio-economic disparity as the second most important topic for planners working in smart cities (after green building and site design), suggesting that planners are aware of the importance of socio-economic stratification. But what can planners do to ensure that investments in smart city technologies are benefiting everyone equally, rather than sucking away financial and political resources needed to fix basic infrastructure issues? How can planners use these technologies to support more meaningful community engagement?

The existing literature suggests that even where technologies enable greater understanding of planning issues or more meaningful engagement, they must be used in tandem with traditional modes of planning such as in-person meetings and design charrettes. Scholars also emphasize the need for ongoing, participatory mechanisms. Especially where institutionally-mediated participation falls within the first five rungs of Arnstein’s ladder, perhaps ICTs can play a role in supporting alternate, “illegitimate” forms of civic action that have a greater impact.

A Fireside Chat with Debra Lam, Incoming Smart Cities and Inclusive Innovation Managing Director

by Chris Thayer

Debra Lam (via Chandler Crowell Photography)

At January 26th’s IPaT Town Hall, CUI Director Dr. Jennifer Clark sat down with Debra Lam, lately of Pittsburgh fame and now the Institute for People and Technology’s new Smart Cities and Inclusive Innovation Managing Director. Previously Debra led the City of Pittsburgh’s Department of Innovation and Performance, which was in charge of technology, sustainability, and performance of the city government. In this fireside chat, Dr. Clark asked Debra about her vision for smart cities, the collaborative potential between government and research institutions, and the potential impact of the changing national political climate on local efforts. This article is a transcript of that interview.

JC: For those of you that don’t know, I’m Jennifer Clark. This is Debra Lam, who we’re welcoming today. Debra is coming from Pittsburgh, where she was in charge of what was called the Innovation and Performance Team at the City of Pittsburgh, so when we found out that Debra was moving to Atlanta, the brainstorm we had was “What if we had the person who actually did so much in Pittsburgh on the City side of the city-university partnership come to Georgia Tech and help us manage the University side of our city-university partnership here in Atlanta?”

Some of you have read some of the discussions about Uber, and how Uber came to Pittsburgh to pilot its autonomous vehicle technologies. Actually, Pittsburgh has also developed — Debra developed — an Inclusive Innovation Plan for Smart Cities.  

So, I wanted to start by asking Debra a couple of questions, picking up a bit on what Beth just said about the changing political environment. Some people have the thought that with the changing political environment, cities generally, and Smart Cities in particular, may fall off the radar. But there are other people who argue that cities have been leading urban innovation from the ground up for many years. As someone on the front line, how do you feel about that? What’s your argument for being an optimist?

DL: Thank you, Jennifer. First of all, I’d like to thank you, all of you, for giving me a very warm welcome. I think all of us here — I’m a huge advocate for city empowerment and just see that cities are on the ground and accountable from a purely operational standpoint, in terms of just day-to-day operations like cleaning the streets and fixing the streetlights, right on to managing citizen accountability and responsibilities like that. So that makes us really on the thrust of not only trying to deliver, but delivering well. And being on that forefront, I think that’s an exciting place to be because the scale is easier to deliver on, and the sensitivities of being on the ground make us more accountable. Accordingly, I’ve been a firm advocate of the idea that it’s great that there are these great international actors, and great national actors, but whatever happens at the international and national levels, cities are still going to be at that front-lines position. Earlier, we talked about how cities have moved forward, proving their potential repeatedly — and I think it shows that, whatever happens in this national climate, we’re still going to lead the way forward.

JC: City-university partnerships are emerging as one of the key vehicles for designing smart cities and developing the systems and platforms essential for optimizing urban systems, expanding access, and building opportunity. What do you think technology-focused universities like Georgia Tech bring to this enterprise? What are key roles that universities can (or possibly should) play?

DL: So, first of all, are people aware of the MetroLab Network? Do you guys know what it is?

No? No, okay. So, for those that aren’t familiar, the MetroLab Network is a national partnership of almost forty cities and more than forty universities across the country that have committed themselves to doing applied research. Basically, trying to matchmake urban challenges with real expertise coming from a university and applying them on a wider scale. We started with our own Metro21 Partnership when I signed Memoranda of Understanding with Carnegie Mellon University and the University of Pittsburgh. And that partnership really brought research to City Hall — it basically created an R&D department within the City of Pittsburgh, which had never existed. We had everything from internships to semester-long projects to graduate research projects to faculty-sponsored grants, all funnelling into City operations to be applied for improvements in decision-making and performance. And that was really, I would say, a turning point in how we thought about innovation, because it allowed us to essentially decrease the risk of trying new things, because we have this university partnership, and to fast-track some of these innovations into City operations. And then from that partnership, we expanded it and launched MetroLab Network at the White House a couple years ago, during Smart Cities Week. And today, it has grown into that collection of forty-plus university-city partnerships, in two of which Georgia Tech and Georgia State are involved, with Jennifer in the lead here in the City of Atlanta.

JC: We were talking earlier about Debra’s thinking about a Smart Cities ecosystem, and articulating how we should be thinking about the different pieces of a Smart Cities ecosystem. I wonder if you couldn’t share a little bit about what you think about that?

DL: First of all, I think this is an evolving space, and a new and growing space. What I found really great, and one of the reasons why I thought it was a great match to come to Georgia Tech, was that there was just a wealth of expertise all around Georgia Tech. And I thought, there’s so much we need to learn in terms of expertise. I really think of Smart Cities as a bigger ecosystem that involves a lot of different parts in collaboration in order to hit some ultimate goals. In this ecosystem, there are certain resources or inputs that are required in any context. These inputs could involve anything from data to technology, software to infrastructure — these are your basic components that cities are constantly looking at in terms of resources that are required to build a Smart City. But then these inputs require processes in order to be utilized. These processes involve new ways to improve efficiency, new ways to engage the public, whether it is for stakeholder engagement or innovative financial or business models, to think about how to find or procure these inputs, that technology or data. There are these processes that could become better or more efficient, or could really, really be more inclusive in thinking about where and how to serve the public, or different sectors of the community. But once you get into pursuing these different inputs and these process improvements, they ultimately lead to: Why do we do Smart Cities, at the end of the day? What is the ultimate goal of Smart Cities?

To me, Smart Cities are ultimately about improving the quality of life for residents. You can think of it as increased resilience, you can think of it as increased sustainability, you can think about it as increased equality, an increasingly just society — all those are goals that we’re striving for. There are certain inputs, resources that we need, processes that we can improve, but the reason why we are going towards a Smart City and all of us are collectively contributing, doing our part, is because we want to make a better world. Call it whatever you want, but that’s basically it. And that’s why I think that as part of the Smart Cities ecosystem, it is central that we are collaborative and integrative in our approaches. It’s hard to put people or areas into a specific box per se, but there are some of us that have great expertise in inputs, whether we are experts in sensors or technologies or data, and there are some who are really heavily involved in processes like stakeholder engagement or different ways of making financial models, and then there are some that are heavily involved in looking at what a just society looks like, or a resilient city looks like. Together, we, and I can say ‘we’ as Georgia Tech, really are formulating a true Smart City ecosystem, with players in all kinds of roles. That’s what makes Georgia Tech really powerful, to me. When we put all that together, we can create a really good narrative of what the Smart Cities should be, and how we could be on the forefront of driving Smart Cities — not just for the City of Atlanta, but for cities all over the world. Thank you, and I’m really glad you’re here.

Open Government Data Policies

by Emma French

Governments at many levels collect large amounts of data every year through their programs and daily operations. Fueled by the belief that data produced by any government is the property of the tax-paying citizens, the open data movement seeks to make government data easily accessible and available to the public. Advocates argue that opening government data can increase government transparency and accountability, enable meaningful citizen participation in policy and decision-making processes, and spur economic growth and innovation in unforeseeable ways.

Open data policies are being passed all over the world to institutionalize the culture of open data and maximize the potential benefits derived from releasing data. In the last decade there has been a notable increase in the number of open data policies passed in the United States (see Figure 1 below). In 2006, Washington D.C. was the first local government to pass an open data executive directive. In 2009 the first of two federal open government directives was issued by the Obama Administration, and local policies were adopted in Memphis, Portland and San Francisco. According to the Sunlight Foundation there are now at least two federal, ten state, nine county, and 46 city-level open data policies in the United States.

Figure 1. This graph shows the number of open data policies (including local, state and federal) passed in the United States between 2006 and 2016. (Note: Some governments have passed multiple policies, often starting with an executive order and then moving to an ordinance or administrative policy. This graph counts each new policy individually regardless of whether or not a policy already existed in that city). Data source: Sunlight Foundation https://sunlightfoundation.com/policy/opendatamap/

Despite the importance of local policy, scant research has been done on the prevalence and effectiveness of open data policies at the city level. In an attempt to fill this gap, CUI researchers recently conducted a study to examine the variation that exists among city level open data policies in the United States. Twelve policies were assessed based on their potential to increase transparency, public participation, and economic innovation (Table 1).

Table 1. Selected Open Data Policies

City | Population (2015) | Year of Adoption | Legal Means | Implementing Agency | Stated Policy Purpose
Pittsburgh, PA | 304,391 | 2014 | Ordinance | Open Data Management Team (new team incl. reps from each city dept. and chaired by the Chief of Innovation and Performance) | Transparency; cross-sector coordination; local software innovation; government efficiency; open by default
Minneapolis, MN | 410,939 | 2014 | Ordinance | Open Data Advisory Group (new team incl. Chief Information Officer and Open Data Coordinator from each dept.) | Transparency; government efficiency; public participation; economic innovation; social progress; collaboration
Kansas City, MO | 475,378 | 2015 | Ordinance | Chief Data Officer (reports to the City Manager) | Transparency; innovation by government, public or other partners
Tulsa, OK | 403,505 | 2015 | Executive Order | Open Data Advisory Board (new team) | Transparency; public participation; efficiency; economic opportunity
Chattanooga, TN | 176,588 | 2014 | Executive Order | Open Data Advisory Group (new team incl. the Chief Information Officer and reps from each city agency); Office of Open Data and Performance Management (created 2015) | Transparency; civic engagement; economic development; improved coordination and efficiency among cross-sector organizations
Cincinnati, OH | 298,550 | 2014 | Administrative Regulation | Open Data Working Group (new team incl. Open Data coordinators from each of the city’s departments); Open Data Executive Committee (new team diff. from Open Data Working Group) | Transparency
Baltimore, MD | 621,849 | 2016 | Ordinance | Chief Data Officer; Department Open Data Coordinators | Innovative uses by city agencies, the public and other partners
San Francisco, CA | 864,816 | 2013 | Ordinance | Chief Data Officer; Department Data Coordinators | Transparency; mobilize high-tech workforce to create civil tools and applications; social and economic innovation; empowering citizens to participate; job creation; public-private partnerships
New York City, NY | 8,550,405 | 2012 | Local Administrative Law | Department of Information Technology and Telecommunications | Transparency; intra- and inter-governmental interoperability; public participation; innovative strategies for social progress; economic opportunities
Washington D.C. | 672,228 | 2014 | Executive Directive | Chief Data Officer (CDO); Open Government Advisory Group (new group incl. Mayor’s designee, the Chief Data Officer, and Director of the Office of Open Government) | Transparency; public participation; collaboration; effective government; economic development; public trust in government
Charlotte, NC | 827,097 | 2015 | Administrative Policy | Department of Innovation and Technology (existing group) | Transparency; civic engagement; economic development; investment; public confidence in government
Houston, TX | 2,296,224 | 2014 | Administrative Policy | Enterprise Data Officer (EDO); Open Data Advisory Board (new group) | Transparency; civic engagement; cross-sector collaboration; efficiency; societal improvement; economic growth

The policies were analyzed by controlling for the transparency of the process through which they were created (open vs. closed) as well as the size of the city in which they were created (small vs. large). Ordinances were included in the open policy creation category, and executive orders and administrative policies comprised the closed category.

Three indexes were developed using proxies to assess the potential of each policy to increase transparency, public participation, and economic innovation. For this study, transparency is defined as the willingness of a government to be open and accountable to the public. Public participation is the degree to which citizens are meaningfully involved in government policy and decision-making processes. Economic innovation is the degree to which citizens, entrepreneurs, and businesses are empowered to produce new innovative services and products. Table 2 below lists the indicators used for each index; indicators marked ‘SF’ were borrowed from the Sunlight Foundation’s Open Data Policy Guidelines. A sketch of how indicator checklists like these can be combined into index scores follows the table.

Table 2. Indexes for Evaluating Open Data Policies

Transparency Index
Proactively release government information online (SF)
Create a public, comprehensive list of all information holdings (SF)
Specify methods of prioritization of data release (SF)
Stipulate that provisions apply to contractors or quasi-governmental agencies (SF)
Create central location for data publication (SF)
Require publishing metadata (SF)
Appropriately safeguard sensitive information (SF)
Public Participation Index
Incorporated public perspectives into open data policy making process
Require incorporation of public perspectives into policy implementation (SF)
Mandate data formats for maximal technical access (SF)
City has created an open data portal
Citizens can request new data via the website
Citizens can ask for help with data use via the website
City has offered free trainings on data access and use
Economic Innovation Index
Place data in the public domain or make available through an open license (SF)
Portal has an API to encourage developers to use the data
Competitions or hackathons to encourage use
Create/explore potential partnerships with other governments or institutions (SF)
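To make the scoring concrete, here is a minimal sketch of how checklists like those above can be turned into index scores. It assumes each indicator is recorded as a binary flag and that all indicators are weighted equally; the indicator keys and the example policy record are hypothetical shorthand, not the study's actual data or method.

```python
# Minimal index-scoring sketch: binary indicators, equal weights.
# Keys are shorthand for the Transparency Index indicators in Table 2.

TRANSPARENCY_INDICATORS = [
    "proactive_online_release",
    "comprehensive_holdings_list",
    "prioritization_method",
    "applies_to_contractors",
    "central_publication_location",
    "metadata_required",
    "sensitive_info_safeguards",
]

def index_score(policy: dict, indicators: list) -> float:
    """Return the share of an index's indicators that a policy satisfies."""
    return sum(policy.get(indicator, 0) for indicator in indicators) / len(indicators)

# Hypothetical example record: a policy satisfying three of the seven indicators.
example_policy = {
    "proactive_online_release": 1,
    "central_publication_location": 1,
    "metadata_required": 1,
}
print(round(index_score(example_policy, TRANSPARENCY_INDICATORS), 2))  # 0.43
```

Comparing policy groups (open versus closed creation processes, or small versus large cities) then reduces to comparing mean index scores across the groups.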

This study’s findings support the claim that, on average, open data policies created through an open process have greater potential to increase transparency, public participation, and economic innovation than those created through a closed process. On average, policies in larger cities scored higher in terms of transparency and economic innovation; however, policies created in smaller cities scored higher in terms of their potential to increase public participation. Barriers to successful open data policies include restrictive licensing, closed formatting, privacy concerns, and uneven access to the technology and knowledge needed to use open data. Policies that embrace meaningful transparency, public participation, and cross-sector collaboration can support the creation of urban innovation ecosystems that promote use of open data.

Recommendations for governments creating an open data policy

  1. Address privacy concerns directly and proactively

Critics of open data will try to use privacy concerns as a way to prevent opening up access to public data. In order to minimize this barrier it is critical that cities address privacy and security concerns up front.

  2. Be open, but also strategic

In order to realize the full economic and innovative potential of open data, open data policies need to require open formatting of data that allows for easy use, re-use, and integration. Data should be dedicated to the public domain or made available through an open license. Cities should make sure that restrictions are limited in order to maximize the potential for the data to be turned into public value. At the same time, it is important to be strategic when crafting policies and plans.

  3. Great policies aren’t enough

In order to transform open data into public value, cities need to collaborate across sectors and political jurisdictions. They need to start thinking about the public not as a client, but as a potential partner whose personal experiences can help inform the city’s work. The focus needs to be less on the supply-side, and more on the demand-side (Janssen, Charalabidis, and Zuiderwijk 2012; Conradie and Choenni 2014). Cities should be intentional about creating a culture of openness internally in order to nurture an ecosystem for open innovation more broadly (Schaffers et al. 2011).

Conclusions

Open data has no intrinsic value; rather, its value is dependent on its use. Open data policies can support cities’ efforts to increase transparency, public participation and economic innovation. However, policies alone are not enough to achieve these goals, and in some cases they may actually inhibit such innovation from taking place. The findings from this study support the claim that open data policies created through open processes have, on average, greater potential to increase transparency, public participation, and economic innovation than those created through a closed process. Cross-disciplinary and cross-sector collaboration were identified as integral to promoting greater interoperability and to expanding use of open data to support innovation. Future research is needed to evaluate the effectiveness of city-level open data policies, and to better understand the processes through which open data is used to create public value.

Flexible Work, Flexible Work Spaces: The Emergence of the Coworking Industry in US Cities

by Thomas Lodato and Jennifer Clark

It is well established that flexible labor markets have changed work practices in the US. However, much less is known about how flexible work practices have produced and are producing flexible workspaces. Our research on coworking spaces illustrates how labor market flexibility has not only defined new employment practices but also created an emerging industry of coworking firms that provide workspaces — and workplace services — to a growing cohort of American workers for whom flexibility is an occupational norm rather than an occasional career condition.

Since the 1980s, economic geographers and industrial and labor relations scholars have documented how flexible work practices led to the reorganization of external and internal labor markets, redistribution of work processes, and renegotiation of employment regulations. These changes have affected how firms make strategic decisions about the spatial division of labor within the firm and how they deploy localized assets (work spaces) to manage an increasingly flexible workforce.

In our research we have constructed a database of 662 active coworking spaces within the continental United States.  From this sample, we analyzed the spatial distribution of coworking firms across the US.  From the set of 662 coworking spaces, we then created a geographically proportional subsample of 116 spaces to research more detailed information on the offerings, business models, and characteristics of coworking firms. Below we report our initial empirical findings.
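One way to draw a geographically proportional subsample of this kind is stratified sampling, in which each metro area contributes spaces in proportion to its share of the full database. The sketch below illustrates that logic under stated assumptions; the data structure is hypothetical and this is not the study's actual sampling procedure.

```python
import random

def proportional_subsample(spaces, k, seed=42):
    """Draw roughly k spaces so each metro's share matches the full database.

    `spaces` is a list of (space_id, metro) pairs; illustrative only.
    """
    random.seed(seed)
    by_metro = {}
    for space_id, metro in spaces:
        by_metro.setdefault(metro, []).append(space_id)

    total = len(spaces)
    sample = []
    for metro, ids in by_metro.items():
        quota = max(1, round(k * len(ids) / total))  # proportional allocation
        sample.extend(random.sample(ids, min(quota, len(ids))))
    return sample

# e.g., a 116-space subsample from a 662-space database:
# subsample = proportional_subsample(all_spaces, k=116)
```

Because the quotas are rounded, the draw may come out slightly above or below k; in practice the largest strata can be trimmed or topped up to hit the target exactly.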

Defining Coworking

First, defining coworking firms has been an empirical challenge for researchers. In an early study, Clay Spinuzzi argued that coworking — as a space — physicalizes the community and professional network many workers have been missing as freelancers, small business owners, and remote or contract workers. Here, we shift the approach to look at coworking in economic terms and focus on what coworking firms provide users. In other words, we define coworking firms by how and in what ways they commodify workspace as a service. The table below defines the four key value propositions we identified through our analysis of the firms in our dataset: 1) Space-as-a-Service, 2) Community, 3) Professional Network, and 4) Work-Life Balance. In our research, 100 percent of coworking firms provided Space-as-a-Service and 95 percent provided Community. As a consequence, we consider these two value propositions defining characteristics of the industry in its present form.

Value Proposition | Description
Space-As-A-Service | Access to affordable office space and office infrastructure (WiFi, furniture, HVAC, mailboxes, etc.)
Community | Access to other workers who can provide important-yet-missing social interaction for freelancers, remote workers, contract workers, and small businesses
Professional Network | Access to a network of both potential peers and clients, and access to opportunities to learn best practices and new skills, as well as find investment and new business opportunities
Work-Life Balance | Access to a work style that allows for a better balance between the demands of a personal and professional identity


Mapping Coworking: Flexible Work in Cities

The first major finding from this research is that coworking is an urban phenomenon. Of the 662 spaces in our database, only one space is located outside of a US metro region. The vast majority of the remaining 661 spaces are located in major metro areas across the United States (see table and map).

We found coworking firms concentrated in large metro regions. This stands to reason: coworking firms, like temporary employment firms, will concentrate in places with large labor markets. We also tested the hypothesis that coworking firms are concentrated in places with a significant presence of “creative class” workers — the high-tech workers associated with narratives about choosing flexibility rather than permanent employment relationships.

We also looked at whether population growth corresponded with the rise of coworking spaces in a given region. The table below presents our initial findings.

Top 10 Metropolitan Statistical Areas with High Concentrations of Coworking Locations and Their Percent of “Creative Class” Occupations, 2016

Metropolitan Statistical Area | Number of Coworking Locations | Population, 2015 estimate (ranking)* | Population Growth 2010-2015* | Creative Class Location Quotient** | Super Creative Core Location Quotient**
New York-Newark-Jersey City, NY-NJ-PA | 65 | 20,182,305 (1st) | 2.96% | 1.12 | 1.10
San Francisco-Oakland-Hayward, CA | 62 | 4,656,132 (11th) | 7.15% | 1.27 | 1.34
Seattle-Tacoma-Bellevue, WA | 39 | 3,733,580 (15th) | 8.26% | 1.20 | 1.36
Los Angeles-Long Beach-Anaheim, CA | 38 | 13,340,068 (2nd) | 3.86% | 1.06 | 1.08
Boston-Cambridge-Newton, MA-NH | 32 | 4,774,321 (10th) | 4.58% | 1.21*** | 1.10***
Washington-Arlington-Alexandria, DC-VA-MD-WV | 32 | 6,097,684 (6th) | 7.61% | 1.48 | 1.53
Chicago-Naperville-Elgin, IL-IN-WI | 28 | 9,551,031 (3rd) | 0.84% | 1.05 | 0.98
Denver-Aurora-Lakewood, CO | 23 | 2,814,330 (19th) | 10.17% | 1.16 | 1.16
Nashville-Davidson-Murfreesboro-Franklin, TN | 18 | 1,830,345 (36th) | 9.21% | 0.99 | 0.82
Atlanta-Sandy Springs-Roswell, GA | 17 | 5,710,795 (9th) | 7.67% | 1.08 | 1.03
MSA Averages# | 2† | 329,894 | 1.62%†† | 0.92‡ | 0.92‡‡

*Annual Estimates of the Resident Population: April 1, 2010 to July 1, 2015. Source: U.S. Census Bureau, Population Division, Release Date: March 2016
**Combination of reported counts for occupation categories originally specified by Florida (2012) and later modified (Florida 2016) [see footnote 6 & 7]
***Occupation data collected for the Boston-Cambridge-Nashua, MA-NH Metropolitan NECTA.
#MSA Averages are calculated based on data available for all MSAs (LSAD M1), except for occupational reporting. Creative class and super creative core location quotients include a combination of MSAs and NECTAs (LSAD M5).
†Average number of coworking spaces includes MSAs where no coworking spaces were recorded. Actual calculated mean is 1.67 (median: 0; mode: 0).
††Median MSA population change: 0.86%
‡Median creative class location quotient: 0.89
‡‡Median super creative core location quotient: 0.87
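For reference, the location quotients reported above follow the standard definition: a region's share of employment in an occupation group divided by the nation's share of employment in that group. In our notation (the symbols are ours, not the table's),

$$\mathrm{LQ}_{i,r} = \frac{e_{i,r}/e_r}{E_i/E}$$

where $e_{i,r}$ is employment in occupation group $i$ in region $r$, $e_r$ is total regional employment, $E_i$ is national employment in group $i$, and $E$ is total national employment. A value above 1.0 indicates a local concentration above the national norm; Washington's creative class location quotient of 1.48, for example, means creative class occupations account for a 48 percent larger share of that region's employment than of the nation's.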

We concluded from this analysis that neither the presence of creative class occupations nor the pace of population growth in a given metro area fully explains the growth of coworking firms. The map below provides some additional support for our conclusion: coworking concentrates in urban labor markets, but variation across urban labor markets has yet to be fully explained.

Our second major finding is that the coworking industry comprises two types of firms: single-location firms and multi-sited franchises. This is consistent with the practices that emerged in the temporary employment services industry, where large firms such as Adecco and Kelly set up global franchise operations while local temporary service firms emerged in individual cities, working in competition and in collaboration with the larger, multi-sited firms in the industry.

Coworking Firms and Number of Individual Operating Locations (Total), 2016

Category | Number of Coworking Firms | Number of Coworking Locations
Total Firms | 468 | 662
Firms with one site | 418 | 418
Firms with between 2 and 5 sites | 39 | 119
Firms with more than 5 sites | 11 | 125

Geographic Coverage of Large Coworking Firms, 2016

Firm Name | Active Spaces Within US | Geographic Coverage
WeWork | 48 | National
Impact Hub | 15 | National
The Cove | 11 | Mid-Atlantic & Northeast
Industrious | 10 | Northeast, Mid-Atlantic, South, & Mid-West
ActivSpace | 10 | Pacific Division
Make Offices | 10 | Mid-Atlantic & Mid-West

Coworking is still a new industry, so we do not yet have evidence of how and in what ways the large firms will interact with the single-site locations and whether they will compete on the basis of service offerings. We did find that most coworking spaces are private firms that allow membership-based access. Our assessment of the variation in offerings by firm versus by site indicates that there is little variation in core services at present. Further research is planned to ascertain whether variation in core services is, in fact, driven by variation in the labor market (geography) rather than competitive firm strategies.

Frequency of Common Coworking Offerings by Firm and Location, 2016

Coworking Offering | By Firm (Percent) | By Individual Site (Percent)
Office Infrastructure (Space-As-A-Service) | 78 (100%) | 116 (100%)
24/7 Access | 60 (77%) | 88 (76%)
Furniture | 76 (97%) | 114 (98%)
Wireless Network Access | 77 (99%) | 115 (99%)
Mailbox and/or Mail Services | 45 (58%) | 69 (59%)
Printing | 61 (78%) | 99 (85%)
Conference/Meeting Rooms | 74 (95%) | 112 (97%)
Meeting Tools | 61 (78%) | 98 (84%)
Coffee and/or Tea | 73 (94%) | 111 (96%)
Kitchen(ette) Access | 50 (64%) | 86 (74%)
Community# (Social Interaction) | 73 (94%) | 110 (95%)
Professional Development## (Professional Network) | 54 (69%) | 90 (78%)
Work-Life Support### (Work-Life Balance) | 49 (63%) | 79 (68%)

#Social Interaction refers to language on websites describing the benefit of being near other workers, either in terms of camaraderie or collaboration.
##Professional Development includes informal learning (e.g. lunch-and-learns), professional panels, networking events (e.g. meet-ups), and members-only events.
###Work-Life Support refers to various listed amenities such as relaxation areas, gym access, bike storage, dog-friendliness, and wellness programs (i.e. on-site yoga or massage).

The Actually Existing Analog City

by Chris Thayer, Center for Urban Innovation

In 2014, my CUI colleague Taylor Shelton wrote an article entitled “The Actually Existing ‘Smart City’,” which discussed examples of the idealized smart city, typically in the form of generating and installing new technologies and forms of data-gathering, which may be intended to support a particular civic purpose or may, apparently, exist for the sake of the technology itself. The article problematized the exclusively technical orientation of much of smart cities practice, but overlooked a deeper question that many researchers and policymakers in the emerging industry have ignored: what is, and more importantly, what should be the definition of this “smart city” that is thought to perhaps “actually exist” among current cities?

When such a question is posed, answers will of course vary based on the background and experiences of the expert consulted. Most commonly, the smart city is held to be an urbanized district — not necessarily the entire city itself — with closely integrated technologies, such as sensor arrays and other data-collection tools, and high-tech efficiency boosts to existing systems. Those with a more public-minded bent may bring up civic hackathons and the ideal of greater connection between governments and the governed, though both the technological know-how required to participate (discussed here by CUI contributor Thomas Lodato) and the sometimes moribund nature of governments in an era of neoliberal devolution make the enthusiasm for such efforts perhaps misplaced. For those with an eye to the oft-neglected physicality of government provision, international smart cities development throws into sharp relief the risk of prioritizing “hot” (and often invisible) smart city interventions over expensive, time-consuming, unpopular infrastructure construction, even as other experts call for additional infrastructure emplacements to support further smart city developments. However, these approaches seldom consider the existing structure and nature of the cities onto which these new technologies are grafted.

This is hardly the first age in which an influx of new technologies, driven forward by commercial interests, has drastically reshaped urban life as we know it. Certain changes are well known and oft discussed: industrialization’s reshaping and intensification of the density and bustle of city life, coupled with new high-quality steel, inspired everything from the dumbbell tenement to the beginnings of the city planning profession. Likewise, we are still feeling the effects of changes begun a scant handful of decades later, when the personal automobile began exponentially accelerating the streetcar-driven stirrings of suburbanization, creating previously unimaginable, and perhaps fundamentally ungovernable, urban sprawl. Indeed, not only did cities create the original information technology of writing, as Townsend noted, but a city was also undone by technology — and government mismanagement thereof — as early as 1788 B.C.E. In Ur, in Mesopotamia, a poorly executed government crackdown on predatory lending practices (enabled by the financially sophisticated cuneiform-on-clay contracts of the time) halted international trade and cost the city wealth and status it never recovered. The pace and extent of technological integration has only increased since those earliest known foibles. We must therefore be cautious in considering our responses to, and integration of, the increasingly speedy, invisible, and powerful technological interventions associated with the current “actually existing” concept of the smart city, in redefining that concept, and in weighing the qualities of the city that currently exists in fact.


[Image: Quantization noise, the information lost in digitization. From Wikipedia]

In this tension between the practical city as it stands today and the technological improvements the “smart city” concept advocates, we may see an echo of the divide between analog and digital signals. Much like cities, analog signals transmit information via continuous change. Neither cities nor analog signals are finite, as digital signals are, and neither is proof against degradation. What’s more, cities are capable of infinite variation, making the addition of “noise” all but guaranteed when attempting to digitize them, just as with any other organic source of information. What remains to be seen is whether the replicability of digital signals will carry over into the digital city: will we at last see smooth, lossless policy transfers between these new technological marvels, or will their signals be blurred by metaphorical quantization noise, their efficiency reduced? The advocates of “smarter” cities must take care that each city’s unique profile not be lost to the homogenizing force of universalized best practices, pursued in the name of a transnational interoperability divorced from practical necessity. Part of that mandate is integrating all the functions of a currently existing city into the “smart” plan, not just the mechanical ones.
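
To make the signal metaphor concrete, the sketch below digitizes a continuous signal and measures the quantization noise left behind. It is our own illustration, not drawn from the smart cities literature; the sine-wave source and the deliberately coarse 3-bit resolution are arbitrary assumptions.

```python
import numpy as np

# Sample a continuous ("analog") signal, then round it to a small number
# of discrete levels, as an analog-to-digital converter would.
t = np.linspace(0.0, 1.0, 1000)
analog = np.sin(2 * np.pi * 3 * t)  # continuous source signal in [-1, 1]

bits = 3              # deliberately coarse: only 2**3 = 8 levels
levels = 2 ** bits

# Map [-1, 1] onto the discrete levels and back again.
digital = np.round((analog + 1) / 2 * (levels - 1)) / (levels - 1) * 2 - 1

# The quantization noise is whatever the digitization threw away.
noise = analog - digital
print(f"RMS quantization error at {bits} bits: {np.sqrt(np.mean(noise**2)):.4f}")
```

Adding bits shrinks the noise but never eliminates it for a truly continuous source, which is precisely the worry here about digitizing cities.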


The recent PCAST report on smart cities echoes an industry-wide trend among smart city proponents: privileging the technological over the social, and showing indifference toward equity concerns and established urban studies fields in favor of “gadgets” and new datasets. Smart cities’ technocratic bent parallels the auto orientation of 1940s sprawl into suburbia; like that sprawl, it is being pushed by the titans of industry best positioned to benefit, with little regard for the scattered, incoherent “city-let” fragments that could be left in the wake of the improved “urban development districts” they advocate (eerily similar to other “district” approaches that have floundered before). The reformers of the previous century — and it has been just about a century — also chased the City Efficient, though with less sophisticated tools, if no less enthusiastic a will. Without careful consideration of what we do, and what we should, mean by the term “smart city,” we are liable to repeat their mistakes as we favor rapid progress over beneficial progress. It is wise to recall that, in some conceptions, government exists to “polish off” the rougher edges of the market, and that the watchman is unlikely to watch itself if cities simply adopt unmodified technological approaches taken directly from market solutions. We must therefore keep in mind the need to broaden the definition of the smart city beyond the merely technical — indeed, perhaps directly into the kind of “education, healthcare, or social services“ delivery that PCAST handily dismisses.

Finally, in seeking a fuller definition, we must ask how much of this smart city proposition — from the specific availability of data up to the identification of the city as “smart” — is just so much boosterish smoke and mirrors. City machines have been lying to their constituents for centuries, generally with the very best kinds of lies: things that are technically true and also completely devoid of meaning. What’s more, one of the best ways to conceal something is to leave it out in the open; data dumped in the name of “openness” can languish in some forgotten corner amongst so much digital detritus, and many smart city transparency mandates — and smart city efforts in general — risk this fate. This is where careful policy and outreach may step in for positive change, above and beyond the debatably effective civic hacking seen thus far. Despite Smart Cities author Anthony Townsend’s tidy definition of the smart city as a “[place] where information technology is combined with infrastructure, architecture, everyday objects, and even our bodies to address social, economic, and environmental problems,” implementation is not so simple. Even leading practitioners of smart city interventions, such as sensor array testbeds and “sentient” homes, have struggled to define this ambiguous term — and the difficulty need not be a negative. On the contrary, the ambiguity that “smart city” is currently experiencing represents a unique opportunity to introduce intentionality into the definition and broaden it beyond its technocratic derivation. By remembering the lessons of our analog past — be it Mesopotamian or merely Betamax in patina — we can shape a city that is not only “smart” but truly wise, one that includes the voices of all its citizens and satisfies their core needs.


This article was originally written for an assignment in a graduate course in urban policy analysis and practice (PUBP 6604), offered every fall in Georgia Tech’s School of Public Policy.