
<strong>Studien</strong><br />

<strong>Lehrstuhl</strong> <strong>für</strong> <strong>Wirtschaftsinformatik</strong><br />

Technische Universität München<br />

ISSN 1612-2593<br />

Nr. 13<br />

Helmut Krcmar, Tilo Böhmann, Jan Marco Leimeister,<br />

Petra Wolf, Holger Wittges<br />

2nd Workshop on Information Systems<br />

and Services Sciences 2008<br />

(2nd WISSS 08)<br />

Herausgeber:<br />

Prof. Dr. H. Krcmar, Technische Universität München<br />

Institut <strong>für</strong> Informatik, <strong>Lehrstuhl</strong> <strong>für</strong> <strong>Wirtschaftsinformatik</strong> (I 17)<br />

Boltzmannstr. 3, 85748 Garching b. München<br />

Tel. (089) 289-19532, Fax: (089) 289-19533<br />

http://www.winfobase.de<br />

Garching, Juli 2008


Table of Contents<br />

Panel 1<br />

IT Governance & IT Controlling: A synthesis on the activities of<br />

ensuring and validating the value contribution of information systems<br />

Sonja Hecht, Stefanie Leimeister, Michael Schermann, Jörg Schmidl,<br />

Uta Schubert, Andreas Schwertsik, Armin Sharafi 1<br />

Panel 2<br />

Innovation Communities<br />

Ulrich Bretschneider, Michael Huber, Christoph Riedl 27<br />

Panel 3<br />

Developing Hybrid Products: Integrating Products and Services<br />

Sebastian Esch, Jens Fähling, Felix Köbler, Philipp Langer 41<br />

Panel 4<br />

Technology Acceptance Research – current development and concerns<br />

Mark Bilandzic, Uta Knebel, Daniela Weckenmann 57<br />

Panel 5<br />

Requirements Engineering, Prototyping and Evaluation in Information Systems Research<br />

Marina Berkovich, Holger Hoffmann, Maximilian Pühler 65<br />

Panel 6<br />

Performance of Information Systems<br />

Bögelsack, A.; Jehle, H.; Gradl, S.; Kienegger, H. 81<br />

Panel 7<br />

Workflow Management in Healthcare<br />

Mauro, C.; Mayer, M.; Rathmayer, M.; Sunyaev, A. 91


IT Governance & IT Controlling: A synthesis on the activities of<br />

ensuring and validating the value contribution of information<br />

systems<br />

Sonja Hecht, Stefanie Leimeister, Michael Schermann, Jörg Schmidl, Uta Schubert, Andreas<br />

Schwertsik, Armin Sharafi<br />

Abstract<br />

Literature Review<br />

Technische Universität München<br />

<strong>Lehrstuhl</strong> <strong>für</strong> <strong>Wirtschaftsinformatik</strong> (I 17)<br />

Boltzmannstr. 3 – 85748 Garching<br />

The objective of a Chief Information Officer is to ensure effective support of the organization&#8217;s<br />

business processes. Not surprisingly, IS researchers have produced a large body of knowledge on<br />

these objectives. The goal of this paper is to make this knowledge base accessible and show<br />

worthwhile avenues for subsequent research. We derive our guiding framework from the foundational<br />

discussion on the contribution of information systems to organizations’ performance. We focus on the<br />

relationship between business processes and information systems. Governing and controlling the use<br />

of information systems are two pivotal managerial activities influencing this relationship.<br />

Consequently, we analyze the relevant journal literature from Information Systems, Management<br />

Studies, and Accounting. Our review shows that despite a large body of knowledge, several research<br />

opportunities exist with the potential of significant contributions towards an effective use of IS.<br />

Introduction<br />

Although information systems seem to be a ubiquitous aspect of organizing modern business<br />

processes, the role of information technology in today’s organizations is still subject to debate among<br />

researchers and practitioners (Krcmar 2005). For instance, researchers have coined the term<br />

productivity paradox to denote the still ongoing discussion about the impact of information systems on<br />

the overall performance of the organization (Brynjolfsson 1996, 1998). Furthermore, the potential of<br />

information technology for strategic impact has been questioned. The argument goes that with its<br />

ubiquity information technology has become a commodity (Carr 2003, 2005a, 2005b). Despite this,<br />

the effective use of information systems has dramatically changed the way business processes are<br />

being executed and innovative information technology has enabled completely new ways of doing<br />

business. Overall, it seems what largely determines the contribution of information systems to the<br />

overall performance of the organization is the effectiveness and efficiency of the way information<br />

systems are being used in the organization – the information management (Weill/Ross 2004).<br />

IS researchers have produced a large body of knowledge on the task of ensuring and validating the<br />

value contribution of information systems. However, different research designs, subtly different<br />



research foci combined with heterogeneous terminology result in a poor map of this knowledge base.<br />

Consequently, the goal of this paper is to make this knowledge base accessible and show worthwhile<br />

avenues for subsequent research. Therefore, we have conducted a literature review and discuss its<br />

implications in this paper.<br />

We have organized the literature review as follows:<br />


1. We selected the literature databases EBSCO, ScienceDirect, and Palgrave to span the search<br />

area for the literature review.<br />

2. We searched in title, abstracts, and keywords of journals for keywords, e.g. “IT governance”,<br />

“IT Controlling”.<br />

3. Having the results, we focused on journals listed in the publication reference of<br />

the WKWI.<br />

4. Next, we read the abstracts of the resulting 145 articles. Articles which were clearly outside<br />

the scope of our review were excluded from further consideration. 56 articles were analyzed<br />

in depth.<br />

5. We used Google Scholar to validate the resulting set of articles. Furthermore, some authors<br />

already knew relevant articles that matched the scope of the review. We extended that<br />

literature base with seminal books and book chapters.<br />
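The keyword-filtering step (step 2) of this procedure can be sketched as follows; the article records and the exact matching logic are illustrative assumptions of ours, not the authors&#8217; actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Article:
    title: str
    abstract: str
    keywords: list

# Illustrative subset of the search terms named in step 2
SEARCH_TERMS = ("it governance", "it controlling")

def matches(article: Article) -> bool:
    """True if any search term occurs in the title, abstract, or keywords."""
    haystack = " ".join([article.title, article.abstract, *article.keywords]).lower()
    return any(term in haystack for term in SEARCH_TERMS)

# Hypothetical candidate records standing in for the database results
candidates = [
    Article("IT governance archetypes", "Decision rights in large firms.", ["governance"]),
    Article("Supply chain optimization", "Logistics network design.", ["SCM"]),
]
selected = [a for a in candidates if matches(a)]  # keeps only the first record
```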

Overall, a systematic and comprehensive approach was chosen to substantiate the following<br />

discussion.<br />

The remainder of the paper is organized as follows. In the next section, we develop a theoretical<br />

framework to guide the analysis of the identified literature. The framework suggests considering three<br />

main managerial activities. We first discuss related research on how to organize the support for<br />

business processes as the core activity of information management. Second, we analyze the body of<br />

knowledge on IT governance to systematize the results on how to ensure an effective organization of<br />

activities in information management. Third, we review the literature on IT Controlling to discuss the<br />

question of how to validate the contribution of information management. In the closing section, we<br />

discuss the implications of our synthesis for further research activities.<br />

Theoretical framework<br />

In the following, we synthesize the theoretical discussion on the contribution of information systems<br />

to the overall performance of an organization. The result is a theoretical framework that guides our<br />

review and analysis of the relevant literature (see Figure 1).<br />

Determining the contribution of information systems first requires analyzing the potential effects of<br />

information systems on the process of value creation. Davenport (1992) shows that information<br />

technology has the potential to enhance the productivity of the organization, e.g., by automating<br />

certain activities. However, Brynjolfsson (1998) shows that no direct effect of the use of information<br />

systems on the overall productivity of the organization could be substantiated (see the dashed line in<br />

between information management and value creation in Figure 1). Critical appraisal of this so-called<br />

productivity paradox (Brynjolfsson 1998) put the focus on the transformational effects of information<br />

systems. For instance, automating activities does not in itself contribute value, but the<br />

affected and changed business process does (Picot et al. 2003). Not surprisingly, this transformational<br />

effect between information systems and value creation is in the focus of a large research stream in IS


research. Ward and Elvin (1999) show that linking the strategic objectives with the enabling<br />

functionalities of the information systems is vital for realizing the transformational potentials of<br />

information systems. These strategic objectives are determined from the organizational strategy<br />

(Mintzberg 1991; Porter 1991; Quinn/Mintzberg 1991). In sum, an alignment between the<br />

organizational strategy and the business processes is necessary to create economic value (Porter<br />

1991).<br />

Figure 1: Theoretical framework (adapted from Junginger 2004)<br />

Clearly, a similar alignment is necessary between the organizational strategy and the information<br />

management (Henderson/Venkatraman 1999). However, the transformational role of information<br />

technology also enables new strategic initiatives, e.g. electronic commerce (Piccoli/Ives 2004).<br />

Figure 1 shows that the contribution of information systems depends on a threefold fit. First, the<br />

information manager has to align his resources with the objectives of the organizational strategy.<br />

However, organizations&#8217; information systems may serve as a source of strategic initiatives. Second, the<br />

information management contributes to the efficiency and effectiveness of the business processes by<br />

providing adequate information services. Third, the internal organization of the information<br />

management determines the realization of the technological potentials (Brown/Grant 2005;<br />

Raghupathi 2007).<br />

The objective of information management is to realize this fit and to manage the transformation of the IT<br />

organization towards it. As Figure 1 shows, the tasks of managing the relationship<br />

between the organizational strategy and the information management are subsumed under the term IT<br />

governance. IT governance is generally concerned with “specifying the decision rights and<br />

accountability framework to encourage desirable behavior in the use of IT&#8221; (Weill 2002). On the other<br />

hand, activities in the realm of IT Controlling are concerned with gathering and analyzing data on the<br />

contributions of the information management to inform decision makers in information management<br />

and the organization. The third vital foundation is the relation between the capabilities and services of<br />

information management and business processes of the organization. According to Krcmar (2005),<br />

three main activities determine the structure of information management. First, the information<br />

management strategy sets the objectives. Second, based on these objectives, the structure of service delivery<br />



has to be determined. Here, an important aspect is the sourcing of IT resources (Dibbern et al. 2004).<br />

Third, the processes of service delivery need to be set up (Great Britain Office of Government<br />

Commerce 2002).<br />

In the following sections we review and synthesize the relevant literature with regard to these aspects<br />

of ensuring and validating the economic value contribution of information systems.<br />

Organizing the value contribution: A process-oriented view<br />

Definition of business process<br />

In order to reach a company’s strategic and operative goals, its actions need to be organized in what is<br />

generally called business processes. To get a better understanding of this often very broadly used term<br />

we first review different definitions of business processes. A fundamental basis for the understanding<br />

of business processes is given by the more general definition of a process according to ISO,<br />

which defines a process as a set of interrelated activities that transform inputs into results.<br />

Interpreting the results of a process as the benefit of the customer, Hammer and Champy (1994)<br />

define a business process as a set of activities, which require one or more inputs to create some value<br />

for the customer. In a similar way, Jacobson et al. (1994) focus on the property of creating value for a<br />

customer to define the term business process by stating &#8220;[...] a business process is the set of internal<br />

activities performed to serve a customer. The purpose of each business process is to offer each<br />

customer the right product or service, with a high degree of performance measured against cost,<br />

longevity, service and quality”. Therefore the value creation for a customer is central to their<br />

definitions. On the other hand, a business process according to Scheer can be interpreted as a chain of<br />

events and functions, where events trigger functions (Scheer 1994). He emphasizes the temporal or<br />

logical order of actions that is intrinsic to business processes, which is also evident when considering<br />

the modeling notation he and his fellow researchers developed - the event-driven process-chains<br />

(Keller et al. 1992). Although his focus is on the operational character of a business process, he also<br />

acknowledges the need to take into account the &#8220;receiver&#8221; of the business process execution&#8217;s outcome,<br />

i.e. the customer (Scheer 1998). Talwar (1993) also focuses on the sequence of activities and the<br />

specified outcome by stating that a business process is a “sequence of pre-defined activities executed<br />

to achieve a pre-specified type or range of outcomes&#8221;. Finally, Davenport also emphasises the order of<br />

events by stating “a process is thus a specific ordering of work activities across time and place, with a<br />

beginning, an end, and clearly identified inputs and outputs: a structure for action” (Davenport 1992).<br />

Synthesizing the different authors’ views of business processes, the qualities of a business process are<br />

(1) focus on value creation for an internal or external customer, (2) ordering or sequence of activities,<br />

(3) processing of inputs to reach a defined output.<br />
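The three synthesized qualities can be captured in a minimal data model; the class and field names below are our own illustration, not drawn from any of the cited definitions:

```python
from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str

@dataclass
class BusinessProcess:
    customer: str                                   # (1) value is created for a customer
    activities: list = field(default_factory=list)  # (2) ordered sequence of activities
    inputs: list = field(default_factory=list)      # (3) inputs that are processed ...
    output: str = ""                                # ... into a defined output

# A hypothetical order-fulfilment process exhibiting all three qualities
order_fulfilment = BusinessProcess(
    customer="external buyer",
    activities=[Activity("receive order"), Activity("pick goods"), Activity("ship goods")],
    inputs=["purchase order", "stock"],
    output="delivered shipment",
)
```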

As the ability of a company to contribute to the market, and therefore to increase its value, depends on<br />

the way business is being done, especially in comparison to its competitors, the management and<br />

alignment of its processes is of utmost importance.<br />

Related research on business processes<br />

After having defined what constitutes a business process, we now want to shed light on the<br />

importance of the enactment of business processes in a company’s environment. In today’s<br />

competitive environment, a company needs to draw its attention to managing its processes in every stage<br />

of the processes’ lifecycle. To focus on each phase of the lifecycle we chose to structure the identified<br />

articles according to the PDCA model &#8211; plan, do, check, act &#8211; by Deming (1982), which can be used to<br />



illustrate the lifecycle of a business process. Additionally, we briefly discuss general considerations that<br />

influence business processes, but do not correspond to exactly one phase.<br />

Figure 2: The plan &#8211; do &#8211; check &#8211; act cycle according to Deming (Source: own creation)<br />

The “plan” phase<br />


In its initial phase the PDCA cycle addresses the need to model and design a process. Accordingly we<br />

now review some of the recent articles in renowned journals that deal with the aspect of how to model<br />

and design a business process.<br />

Given the complexity of enacting a business process by means of information<br />

technology, it makes sense to rely on best practices early on. This especially applies when putting<br />

thought into the maintainability of the processes and their supporting infrastructure, i.e. when designing a<br />

process that heavily relies on IT. ITIL is a collection of best practices for IT service support and<br />

delivery that facilitates the application of IT as a service for business processes (ITIL 2008). Its ability<br />

to align the business and the IT part of a company is therefore an interesting topic to study in an early<br />

phase of a process lifecycle. Kashanchi and Toland specifically research the ability of ITIL to align<br />

the business and IT parts of a company using the Strategic Alignment Model<br />

(Henderson/Venkatraman 1999) as a basis (Kashanchi/Toland 2006, p. 341). By conducting three case<br />

studies they derive the conclusion that the adoption of ITIL can have a significant positive influence on<br />

how the business part of a company interacts with IT to achieve strategic and competitive advantage.<br />

Another dimension in modeling processes is taking into account the social perspective, i.e. the attitude<br />

of persons that are involved in the decisions for and the use of information systems in a business<br />

process. This is especially true in the information systems discipline that focuses on the interaction<br />

between computer systems and social behaviour. To this end, Wu investigated the determinants of<br />

senior management’s quite frequent reluctance to use information technology as an enabler and source<br />

of innovation for business process reengineering and not only as a means for automation (Wu 2003).<br />

To reach an understanding he utilizes the theory of reasoned action and conducted a three-step<br />

approach. Wu could show that the treatment had a significant effect and the behavioural determinants<br />

changed to a much more favourable outcome, showing that remedial actions have a relatively<br />

effective influence on behaviour. Thus, when designing processes, it seems imperative to<br />

provide senior management with sufficient information.<br />

Finally, when planning IT-supported business processes today, the collaboration with other entities<br />

in a company’s value creation chain needs to be considered. Therefore a central conclusion of<br />

Schubert (2007, p. 188) is that in the business environment of today &#8220;where customers expect fast and<br />

just-in-time delivery, business processes need to be electronically supported”. Furthermore she points<br />

out that for so called networked or virtual organizations, connections between information systems are<br />

crucial. “For these companies which are jointly producing and offering products or services, a<br />

common electronic infrastructure where business processes are smoothly supported among the<br />

partners, is imperative to the generation of business value. Whereas virtual organizations show how<br />

far an electronic integration among companies can go, traditional companies are also confronted with<br />

the same requirements: faster and cheaper processes combined with higher data quality.” (Schubert<br />

2007). To achieve the aim of electronic integration, business interoperability plays a major role.<br />

Legner and Wende define this interoperability as “the organizational and operational ability of an<br />

enterprise to cooperate with its business partners and to efficiently establish, conduct and develop IT-supported<br />

business relationships with the objective to create value.” (Legner/Wende 2006)<br />

The &#8220;do&#8221; phase<br />

After having put some thought into what the processes should look like and how to implement them,<br />

enabling their enactment is the consequent next step in the PDCA model. Since a company’s<br />

processes ought to be aligned with and supported by its available IT resources, we first focus our<br />

attention on the support IT provides for the execution of business processes. After that we address typical<br />

issues that arise in collaborations. Determining which kind of supportive IT infrastructure to use<br />

in order to best support current as well as anticipated future needs is a challenge that today&#8217;s<br />

companies need to address. The adoption of WebServices as enablers of a new generation of<br />

collaborative service provision has been the topic of choice for many researchers recently. This is why<br />

we now review a selection of articles that deal with this topic.<br />

The strong link between the WebServices technology and the process management discipline is<br />

pointed out by Zhao and Cheng (2005). They stressed that according to the number of publications,<br />

WebServices are increasingly being researched, and process management research is growing as well.<br />

Considering the potentials of process-driven application integration and its implementation<br />

opportunities offered by WebServices, they deem the two research streams as highly correlated.<br />

Consequently they state three areas of research at the natural intersection of both approaches:<br />

first, the extension of the technical foundation, e.g. research giving rise to new protocols for QoS,<br />

security, etc.; secondly, architecture and application development, e.g. how to combine services and<br />

how to orchestrate them; and finally, strategic analysis, e.g. how to decide which services should be<br />

combined and which should not.<br />

In accordance with the close relationship stated by Zhao and Cheng, Moitra and Ganesh investigate one<br />

special aspect of this relationship – the ability of the WebServices technology (WS) to contribute to a<br />

flexible, business process (BP) oriented IT infrastructure and its impact on organizational adaptation<br />

(Moitra/Ganesh 2005). They derive some propositions that indicate that firm adaptation, flexible<br />

business processes and WebServices are each related to one another and that dynamic environments<br />

catalyse the relation of flexible business processes and WebServices. Their results also point to a<br />

relation between flexible BP / WS and the ability to integrate new IT systems and BP, respectively. They<br />

interpret their work as an augmentation of the resource-based view of the firm (RBV) and the dynamic<br />

capabilities approach.<br />

A more specific aspect of the interaction between business processes, the problem of<br />

semantic differences in business processes, is addressed in (Brockmans et al. 2006). The authors<br />

illustrate that whenever two or more partners of a value chain need to collaborate, it is essential that<br />

the understanding of terms that are used in the context of the business processes is aligned, e.g. that<br />

synonyms and homonyms are dealt with accordingly. To reach this goal, the authors suggest using a<br />

background ontology which is modelled in the web ontology language OWL and business processes<br />

that have been transformed to Petri nets. These artefacts are then used to compute the similarity<br />

between the two or more relevant Petri nets; entities whose similarity is above a defined threshold are<br />

deemed to represent the same elements, e.g. to be synonyms. This way the authors hope to improve<br />



the alignment of different business processes and to facilitate the interoperability of cross-company<br />

business processes.<br />
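The thresholding idea can be illustrated in a few lines. Note that Brockmans et al. derive similarity scores from an OWL background ontology and Petri-net structure; the string-based measure below is only a stand-in of ours:

```python
from difflib import SequenceMatcher

def label_similarity(a: str, b: str) -> float:
    """Stand-in similarity measure; the cited approach computes similarity
    from a background ontology, not from string comparison."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_elements(elements_a, elements_b, threshold=0.8):
    """Pair up elements whose similarity exceeds the threshold;
    such pairs are deemed to represent the same concept (synonyms)."""
    return [(x, y) for x in elements_a for y in elements_b
            if label_similarity(x, y) >= threshold]

# Element labels from two hypothetical partner processes
pairs = match_elements(["Check Invoice", "ship goods"],
                       ["check invoice", "approve budget"])
```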

The “check” phase<br />

Information systems not only support a company by facilitating the enactment of business processes<br />

and by this means increasing productivity, but they also offer a way for analyzing the daily actions of<br />

a company by allowing for monitoring that enables the evaluation of the processes and their results.<br />

One aspect of the check phase is to determine the conformance of daily processes with their expected<br />

executions. To this end, the technique of process mining can be used. Linking back to the “plan”<br />

phase of the PDCA model that we build upon for structuring our literature review, the article by<br />

Rozinat and van der Aalst addresses the conformance testing of a business process model against its<br />

enactment in real life (Rozinat/Aalst 2008; Rozinat/van der Aalst 2006). They build upon previous<br />

work in the area of process mining as described in e.g. (van der Aalst et al. 2004). The principal idea<br />

is to find process models by investigating logs of process-aware information systems &#8211; any system<br />

that supports some notion of process like workflow management systems, web servers or even<br />

groupware systems. By analyzing the execution paths of different enactments they reconstruct a<br />

process model, which can then be compared to the modelled process, thus checking the conformance<br />

of the real-life enactment to the intended enactment patterns.<br />
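A much-simplified sketch of this conformance idea follows. Real process mining replays logs against Petri-net models (as in Rozinat/van der Aalst); here the "model" is reduced to a set of allowed direct-succession pairs, which is our own simplification:

```python
# Allowed direct-succession pairs, standing in for the modelled process
MODEL = {
    ("register", "check"),
    ("check", "decide"),
    ("decide", "notify"),
}

def conforms(trace):
    """True if every pair of consecutive events in the trace is allowed."""
    return all(pair in MODEL for pair in zip(trace, trace[1:]))

def fitness(log):
    """Fraction of logged traces that conform to the model."""
    return sum(conforms(t) for t in log) / len(log)

log = [
    ["register", "check", "decide", "notify"],  # conforms to the model
    ["register", "decide", "notify"],           # skips the mandatory check
]
```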

The “act” phase<br />

Companies often face steadily changing challenges in dealing with their daily operations to satisfy<br />

their stakeholders. Consequently it is often necessary to change the way business is done, i.e. it is<br />

necessary to adapt a company&#8217;s business processes. Business Process Reengineering (BPR) has<br />

become a quite popular tool for coping with changing business processes, which deals with the analysis<br />

and design of processes and the corresponding workflows between and within organizations<br />

(Davenport 1992; Davenport/Short 1990). Attaran notes that the term reengineering has “first<br />

appeared in the information technology (IT) field and [having] evolved into a broader change process.<br />

The aim of this radical improvement approach is quick and substantial gains in organizational<br />

performance by redesigning the core business process.&#8221; (Attaran 2004). This is in accordance with<br />

Hammer and Champy, who first considered IT as the key enabler of BPR.<br />

Nevertheless many existing IT infrastructures are not very well suited for changing demands from the<br />

business processes they support. According to Aier and Schönherr (2006) mainly large enterprises<br />

have established complex and heterogeneous IT infrastructures that evolved over time<br />

(Aier/Schönherr 2006). Against the background of new requirements, replacing or reconstructing<br />

these complex systems cannot be considered realistic. Therefore the integration and<br />

interoperability of these infrastructures is an ongoing important topic. They describe the enterprise<br />

architecture as being composed of two columns: the organization architecture, which covers all<br />

non-technical elements, and the IT architecture, which contains all technical components (Aier/Schönherr<br />

2006). With this in mind, it becomes clear that there is a strong relation between BPR and IT.<br />

Attaran sees them as “natural partners”, whose “relationships have not been fully explored” yet.<br />

“Working together, BPR and IT have the potential to create more flexible, team-oriented,<br />

coordinative, and communication-based work capability. IT capabilities should support business<br />

processes, and business processes should be in terms of the capabilities IT can provide.” (Attaran<br />

2004). Henkel et al. also agree that &#8220;Adding technical constraints to the design process means that the<br />

designed executable process needs to be aligned with existing software services.” (Henkel et al. 2004).<br />

They identified a major difference in the process design from a business perspective and from a<br />

technical perspective. The focus of the first is to solve the problems that are expressed, while in the second the<br />

aim is to leverage business support (Henkel et al. 2004). Wu considers IT as “both a strategic catalyst<br />

and enabler of process reengineering.” He summarizes that “unlike automation, reengineering is about<br />



innovation and it also requires recognition of the new, unfamiliar capabilities of IT for rethinking<br />

business process instead of its familiar ones. Many organizations are beginning to recognize the<br />

importance of IT strategy integration in process reengineering.” (Wu 2003).<br />

A practical example of the application of BPR is illustrated by Kobayashi, Tamaki and Komoda.<br />

Building upon the use case of fast adaptation of a supply-chain-management (SCM) process they<br />

argue that adaptation, and thus process redesign, in the SCM context consists of two steps:<br />

(re-)designing the business process itself and (re-)designing its supporting information system (Kobayashi et<br />

al. 2003). Therefore their proposed approach consists of two major steps: modelling the infrastructure<br />

and then implementing it. Modelling is done hierarchically according to importance for the<br />

main goal (SCM improvement in their case), utilizing templates for the more important and also<br />

generic processes. Once the business process is defined, the data interfaces are modelled. In the<br />

implementation part, they propose to use a workflow that triggers as-is processes and legacy systems<br />

by making use of adapters and thereby effectively promoting an EAI approach. Summing up their<br />

approach, they propose a development process that incorporates a design phase (split into business and<br />

system design) and an implementation phase, and that focuses on reuse where feasible. According to their<br />

case study in one company, their approach reduces the necessary time and effort for a change in the<br />

SCM context to one third.<br />
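The adapter idea at the core of this EAI-style approach can be sketched as follows; all class and method names are hypothetical, not taken from Kobayashi et al.:

```python
class LegacySystem:
    """Stand-in for an as-is system with its own call convention."""
    def process_order_v1(self, payload: dict) -> str:
        return f"legacy handled {payload['order_id']}"

class LegacyAdapter:
    """Wraps a legacy system behind the uniform execute() interface
    that the workflow engine expects."""
    def __init__(self, system: LegacySystem):
        self.system = system

    def execute(self, payload: dict) -> str:
        return self.system.process_order_v1(payload)

def run_workflow(steps, payload):
    """Trigger each step in sequence, as the proposed workflow would."""
    return [step.execute(payload) for step in steps]

results = run_workflow([LegacyAdapter(LegacySystem())], {"order_id": "A-17"})
```

Swapping a legacy system then only requires a new adapter, not a change to the workflow itself, which is the reuse argument the authors make.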

General considerations for business processes<br />

In an effort to categorize processes, Schubert builds upon the Porter taxonomy to define two types of<br />

business processes: primary processes which are “activities connected to value creation (e.g.<br />

procurement, production, sales)” and supporting (or secondary) processes that are “activities which<br />

support the operation of a company (e.g. accounting, finance, knowledge management).” (Schubert<br />

2007). Beyond that she emphasises that “[…] a company which wishes to offer excellent performance<br />

must draw appropriate business processes from its core competences. These business processes should<br />

be organised in such a way that the desired differentiating factors, when measured against the<br />

competitor&#8217;s performance, can be achieved.&#8221; (Schubert 2007). Henkel, Zdravkovic and Johannesson<br />

claim that organizations should design executable processes which “enable individual services to be<br />

composed to support new, complex business interactions. When designing executable processes,<br />

consideration must be paid to both business requirements, and the technical context that the process<br />

should be executed in&#8221; (Henkel et al. 2004).<br />

Explicitly considering the interdependencies between IT and business strategy, Kashanchi and Toland<br />

(2006, p. 340) deem it obvious that &#8220;organizations are investing in IT to achieve competitive<br />

advantage as well as profit. They have realized that by only investing in IT applications they cannot<br />

attain sustainable competitive advantage. However, to obtain that advantage they need to utilize IT<br />

functionality on a continuous basis” which means aligning IT and business strategy. The alignment<br />

gap between IT and business strategy is a critical issue as it can result in investing heavily in IT<br />

systems that do not meet business needs. “IT needs to be designed so that it has the flexibility to<br />

develop in new directions alongside business strategy” (Kashanchi/Toland 2006). Schubert (2007)<br />

claims “business processes and interoperability can be optimized through the use of business<br />

software” to reach process excellence. Nevertheless, ERP systems are primarily used<br />
for supporting activities. She identifies two different perceptions of IT. One faction believes “[…]<br />
that IT has a particular potential in achieving competitive advantage. The other faction takes the view<br />
that the diffusion process of IT is now so advanced that it has already become a so-called<br />
‘Commodity’ or ‘Utility’” (Schubert 2007). Turning to software services, Henkel, Zdravkovic<br />
and Johannesson point out that technical constraints can influence the realization of business<br />
processes. If properly designed, however, “executable processes can be used to closely support business<br />
processes by the integration of existing software services” (Henkel et al. 2004).<br />
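The notion of composing individual software services into an executable process can be illustrated with a minimal sketch; the service names, order fields and composition logic below are invented for illustration only and are not taken from Henkel et al.

```python
# Two hypothetical "services" standing in for real software services that an
# executable process would integrate (names and logic are invented).
def check_credit(order):
    order["credit_ok"] = order["amount"] <= order.get("credit_limit", 1000)
    return order

def reserve_stock(order):
    # Only reserve stock if the credit check cleared the order.
    order["reserved"] = order["credit_ok"]
    return order

def executable_process(order, steps):
    """Run an ordered composition of services: a minimal stand-in for an
    executable process that integrates existing software services."""
    for step in steps:
        order = step(order)
    return order

result = executable_process({"amount": 250}, [check_credit, reserve_stock])
print(result["reserved"])
```

The point of the sketch is merely that the same services can be recombined into new step orders to support new business interactions, subject to the technical context in which the process runs.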



Ensuring the value contribution: IT Governance<br />

Overview<br />

For decades, spectacular failures of large information technology investments have been reported from<br />
time to time, such as major enterprise resource planning (ERP) systems initiatives that were<br />
never completed or successfully implemented (Weill 2002). On the other hand, some companies<br />
consistently achieve above-industry returns and success with regard to their IT investments. These<br />

contradictory reports lead researchers as well as IT practitioners to the conclusion that the IT<br />

performance of firms must be somehow related to good IT governance, i.e. an effective and efficient<br />

management of IT-related decisions (Brown/Grant 2005; IT Governance Institute 2005; Weill/Ross<br />

2004). As IT has become more important and pervasive, companies face the challenge of managing<br />
and controlling IT to ensure that value is created (Weill 2002; Weill/Ross 2004). IT governance has thus<br />

gained increased attention in the general IT strategy and management research stream (Meyer et al.<br />

2003) and is another essential element in IT and business alignment (Brown/Grant 2005; Luftman et<br />

al. 2006, p. 31). According to Weill and Ross, who are among the most prominent researchers in this<br />

research stream, IT governance is defined as “specifying the decision rights and accountability<br />

framework to encourage desirable behavior in the use of IT” (Weill/Ross 2004). In contrast, the IT<br />

Governance Institute expands the definition to include underpinning mechanisms: “… the leadership<br />

and organisational structures and processes that ensure that the organisation’s IT sustains and<br />

extends the organisation’s strategies and objectives” (o.V. 2003). To some extent, IT governance is<br />
similar to the governance of other management areas, such as financial management. In general,<br />
firms encourage particular desirable behaviors that exploit and reinforce the human, systems, and<br />
intangible assets that comprise their core competencies in order to achieve their goals (Raghupathi<br />

2007; Rau 2004; Weill 2002).<br />

In general, four critical domains with key decisions for IT can be identified in the IT governance<br />

domain (Weill 2002):<br />

1. IT principles: they are high-level statements about how IT is used in a company. These<br />

principles capture the essence of a firm’s future direction and the role IT plays in this<br />

context (Broadbent/Weill 1997; Davenport et al. 1989)<br />

2. IT infrastructure strategies: they relate to shared and standard IT services including the<br />

network, help desk, customer data, and applications (Weill 2002)<br />

3. IT architecture: the architecture is a set of policies and rules that govern the use of IT and<br />

includes standards and guidelines for technology, use of data, design of applications as well as<br />

IT-related processes (Weill 2002)<br />

4. IT investment and prioritization: these issues cover the whole decision process of where IT<br />

investments should be focused.<br />

According to the IT governance institute, there are five areas that IT governance should focus on with<br />

regard to key domains introduced above (o.V. 2003):<br />

1. Strategic alignment: linking business and IT so they work harmoniously together<br />

(Brown/Grant 2005; o.V. 2003)<br />

2. Value delivery: concentrating on optimizing expenses and proving the value of IT<br />

(Raghupathi 2007)<br />




3. Risk management: instituting a formal risk framework that gives some rigor to how IT<br />
measures, accepts and manages risk, as well as reporting on what IT is managing in terms of<br />
risk (March/Shapira 1987; Raghupathi 2007; Smallman 1999)<br />

4. Resource management: optimizing knowledge and infrastructure (IT personnel, capabilities,<br />

and infrastructure assets)<br />

5. Performance measures: tracking project delivery and monitoring IT services<br />

With regard to the key areas mentioned above, one prevailing issue has dominated research for<br />
decades, long before more recent topics such as enterprise architecture management came up as<br />
key drivers of IT governance: the general decision of a firm on how to provide IT for the<br />
organization and who should provide it. The term coined for these decision and management<br />
issues is IT/IS (out-)sourcing. Since IT governance is a very broad area of information<br />
management, for the purpose of this paper only one aspect of IT governance is investigated in<br />
greater detail: IS outsourcing, which is described in the next chapter.<br />

IS Outsourcing<br />

The generic notion of outsourcing refers to a very general decision of an organization whether to make<br />

or buy certain products, services or parts thereof (Loh/Venkatraman 1992a, p. 9, 1992b, p. 336). The<br />

business practice of making arrangements with an external entity for the provision of goods or<br />

services to supplement or replace internal efforts has been around for centuries (Dibbern et al. 2004).<br />

The growth of the worldwide IS outsourcing market can be attributed to two primary phenomena<br />

(Lacity/Willcocks 2001). First, the increased interest in IS outsourcing is mainly a consequence of a<br />

shift in business strategy. Many companies have abandoned their diversification strategies to focus on<br />

core competencies that the organization does better than its competitors. As a result of this focus<br />

strategy, information systems came under scrutiny. Company executives often consider the IS<br />

function as a non-core activity with the underlying belief that IT providers have economies of scale<br />

and technical expertise to provide IS services more efficiently than internal IS departments. Second,<br />

the increase in outsourcing is a consequence of the unclear value delivered by IS. In many companies,<br />

IS is regarded as an overhead and (essential) cost factor. Thus, the refocus on core competencies and<br />

the perception of IS as a cost burden prompt many companies to engage in a variety of outsourcing<br />

arrangements (Dibbern et al. 2004).<br />

Multidimensionality of outsourcing<br />

Definitions of information systems outsourcing abound with little consistency or agreement in sight<br />

(Willcocks et al. 2007). Precise definitions of IS outsourcing differ in the IS literature (Glass 1996).<br />

The label outsourcing has been applied to “everything from use of contract programmers to third party<br />

facilities management” (Dibbern et al. 2004). The simplest definition of outsourcing is the purchase of<br />

goods or services that were previously provided internally (Lacity/Hirschheim 1993a). Information<br />

systems outsourcing thus could be viewed as the purchase of information technology services<br />

previously provided internally (Sargent 2006, p. 280). A more sophisticated and widespread<br />

definition of IS outsourcing is provided by Kern and Willcocks (Kern/Willcocks 2001) who<br />

understand information technology outsourcing as “the handing over to a third party of the<br />

management and operation of an organization’s IT assets and activities” (p. 1). For the purpose of this<br />

paper, we will use this definition as the working definition of IT outsourcing.


Areas or objects of outsourcing include the whole range of IT activities and have also changed over<br />

the decades. Outsourcing originated in professional services and facilities management for<br />
financial and operations support (Lee et al. 2003). Today, outsourcing areas are generally<br />
differentiated into hardware, software, and processes (Lee/Kim 1999).<br />

As the definition of IS outsourcing appears to be somewhat blurry, so are the various forms and<br />
arrangements of outsourcing that have evolved over the last decades. Various authors have tried to<br />
capture the multidimensionality of outsourcing, e.g. Lacity/Hirschheim (1993b), Millar (1994),<br />
Wibbelsman/Maiero (1994), Lacity et al. (1995), Willcocks/Kern (1998), Lacity/Willcocks (2001),<br />
Lee/Kim (1999), and de Looff (1998), by building classifications of sourcing options and<br />
characteristics. Their approaches to classifying different outsourcing arrangements demonstrate, on<br />
the one hand, the manifold nature of the outsourcing field and the variety of ways to carry out an<br />
outsourcing venture. On the other hand, they clearly show the difficulty of systematically depicting<br />
different options along standardized categories or criteria. Most authors focus on only one or two<br />
specific aspects, if any. While Lacity and Hirschheim (1993a) differentiate outsourcing arrangements<br />
by vendor involvement and scope of outsourcing, Millar (1994) focuses on the degree of outsourcing<br />
as well as strategic aspects. Wibbelsman and Maiero (1994) instead point out the multidimensionality<br />
and describe a continuum of outsourcing options with regard to the involvement of external supply.<br />
In a later approach, Lacity and Hirschheim (1995) offer sourcing decision options distinguished by<br />
the degree of external supply. Willcocks and Lacity (Lacity/Willcocks 2001; Willcocks/Lacity 1998)<br />
do not introduce categories, but rather describe current “trends of sourcing practices”. Another quite<br />
practice-oriented distinction is put forward by Lee (1999), addressing the ownership of resources and<br />
the transfer of assets. Finally, de Looff (1998) attempts to systematically depict IS functions along<br />
their functional, analytical, and temporal character.<br />

What adds to the jumble of categorization efforts is the fact that the use of certain outsourcing options<br />
is not clearly specified and outsourcing researchers have not standardized the definitions of outsourcing<br />
arrangements. Willcocks and Lacity (Lacity/Willcocks 2001; Willcocks/Kern 1998), for example,<br />
define the option “value-added outsourcing” as combining the strengths of both outsourcing parties in<br />
order to market new products and services. In contrast, Klepper and Jones (1998) regard this type of<br />
outsourcing as an “intermediate” relationship characterized by complex work and substantial benefits.<br />
Under the same term “value-added outsourcing”, Millar (1994) understands “some IS activity that is<br />
turned over to a third party vendor who is sought to provide a service which adds value to the activity<br />
that could not be cost-effectively provided by the internal IS of the client”. Instead, Millar (1994)<br />
defines “cooperative outsourcing” (i.e. “some IS activity is jointly performed by a third party provider<br />
and the internal IS department of the client”) in Willcocks’ and Lacity’s sense of value-added<br />
outsourcing.<br />

To systematize the different outsourcing options mentioned in the literature, we build upon and extend<br />

the work of von Jouanne-Diedrich et al. (2004; 2007; 2005) by integrating the different notions of<br />

outsourcing arrangements found in the literature. Figure 2 summarizes the categorization attempts<br />

mentioned above and characterizes outsourcing arrangements along nine dimensions.<br />



[Figure 2 spans nine dimensions of outsourcing options: location (onsite, onshore/domestic, nearshore,<br />
offshore/farshore outsourcing); ownership (asset vs. service outsourcing); number of vendors (single vs.<br />
multi sourcing); sourcing object (infrastructure, application, business process, knowledge process<br />
outsourcing); IS activity (planning, development, implementation, operation, maintenance); IS sourcing<br />
chronology (insourcing, outsourcing, backsourcing); financial dependency (internal/captive outsourcing<br />
and spin-offs such as shared service centers, joint ventures, external outsourcing); degree of external<br />
supply (total insourcing, selective outsourcing/outtasking, body shop, total outsourcing); and strategic<br />
aspects (transitional, transformational, co-sourcing/performance-based and business-benefit contracting,<br />
value-added/cooperative outsourcing).]<br />

Figure 2: Dimensions of Outsourcing Options<br />
(Source: extended from von Jouanne-Diedrich 2004; 2007; 2005)<br />

Evolution of outsourcing research<br />

Early outsourcing research centered on acquisition, with a focus on the make-or-buy decision<br />
between in-house and external acquisition of information technology (Buchowicz 1991). But with<br />
Kodak’s 1989 outsourcing decision, outsourcing emerged as a key method of managing information<br />
systems (Loh/Venkatraman 1992b), and a pivotal issue concerned the motivation for outsourcing.<br />
Debates then shifted from whether or not to outsource to how much to outsource (scope), with various<br />
options such as selective or total outsourcing (Lacity/Hirschheim 1993a). A difficult research<br />
issue pertained to determining effective outsourcing performance (Loh/Venkatraman 1995). Due<br />
to the largely unsolved performance issue, the outsourcing euphoria experienced a backlash,<br />
resulting in a discussion about insourcing and backsourcing options for information technology. But<br />
despite its critics, outsourcing was by then already embedded in most organizations’ strategic plans,<br />
and the contract specifying the relation between outsourcing providers and their clients emerged as a<br />
centerpiece issue (Saunders et al. 1997). The primary focus thus was on outsourcing determinants,<br />
benefits, vendor selection, and contracting, with very little research investigating the outsourcing<br />
relationship and the processes required to support it (Sargent 2006, p. 280). Although outsourcing<br />
contracts were often designed in a complex fashion to cover unexpected contingencies or<br />
opportunistic service provider behavior, it was impossible to account for every possible scenario in a<br />
contract, and client-vendor interactions often went beyond rules and contractual agreements. Instead,<br />
they additionally relied on soft factors such as trust, commitment, and mutual interest. A closer<br />
relationship between clients and their service providers emerged, recognized as partner-based or<br />
relationship outsourcing (Goles 2001; Kern 1997; Lee/Kim 1999; Willcocks/Lacity 1998). Many<br />
organizations engaged in this sort of partnership with their outsourcing vendors after experiencing the<br />
limitations of legal contracts (Kern/Willcocks 2000; Koh et al. 2004). As a result, an effective<br />
relationship, rather than the actual service or outsourcing function itself, became known as a key<br />
predictor of outsourcing success (Goles 2001; Lee/Kim 1999).<br />

Lee et al. (2003) graphically illustrate the evolution of outsourcing research issues from the plain<br />
“make-or-buy” decision towards the role of relationship issues and the emergence of a partnership-based<br />
view in IS outsourcing (see Figure 3). The authors distinguish between two stages of IS<br />
outsourcing evolution, the first being characterized as “driven by client’s self interest, shaped by a<br />
hierarchical relationship and dictated by a win-lose strategy” (Lee et al. 2003, p. 87). The second stage<br />
of IT outsourcing, in contrast, marked the beginning of a mutual exchange relationship orientation<br />
in which the outsourcing vendor is regarded as a partner.<br />

[Figure 3 traces outsourcing research issues across two stages. The first stage starts from the<br />
make-or-buy choice between internally developed technology and its external acquisition, followed,<br />
after Kodak’s outsourcing decision in 1989, by the insource-or-outsource question (impact, benefits,<br />
risks and expectations of outsourcing), scope (degree and period of outsourcing, number of vendors,<br />
outsourcing types), performance (user and business satisfaction, service quality, cost reduction) and<br />
the formal contract as governance (well-designed contracts to reduce unexpected contingencies,<br />
service level agreements), shaped by the self-interest of each party, a client-centered view, a<br />
hierarchical relationship and a win-lose strategy. The second stage, moving from limited, single-focused<br />
solutions towards diverse, integrated services and solutions (e.g. BPO, ASP, KPO), covers partnership<br />
or not, partnership motivation, partnership scope, partnership performance and informal partnership<br />
contracts (intangible elements of the outsourcing relationship, effective ways of building the client-vendor<br />
relationship), shaped by mutual interest, a partnership view, an equal relationship and a win-win strategy.]<br />

Figure 3: Evolution of outsourcing research issues (Source: Lee et al. 2003)<br />

Validating the value contribution: IT Controlling<br />

Research on the controlling of IT/IS plays a major role in demonstrating the value<br />
contribution of information systems. Important tasks of IT Controlling are to coordinate,<br />
arrange and guide the IT function and to inform management about the efficiency and effectiveness<br />
of its IT activities. IT Controlling comprises the planning and monitoring of projects in information<br />
and communication technology as well as the cost and results accounting of projects, installed<br />
systems and applications. The implementation of IT Controlling should take an economic view of<br />
information as a resource. Many different points of view on IT Controlling exist in both research and<br />
practice (Vöhringer, 2002). While German scientific literature uses the term IT Controlling uniformly<br />
for all of these points of view, Anglo-American literature distinguishes a set of varying terms<br />
concerning the evaluation of the IT value contribution (Schauer, 2006). Widespread terms are IT/IS<br />
(investment) evaluation, IT/IS (performance) measurement, and measurement of IT/IS costs and<br />
benefits. We therefore consider the following passages from three different perspectives: the business<br />
view, the life-cycle view and the performance view.<br />

A business-oriented view<br />

Regardless of industry or management level, IT/Business Alignment is an area of high priority. This<br />
was confirmed by a survey of 182 companies in the U.S. from 2003 to 2004 (Luftman, 2005). This<br />
high degree of prioritization may be a response to the lack of clearly identifiable and noticeable<br />
positive productivity trends that can be directly correlated with increased IT investment. This<br />
productivity paradox of IT was described as early as 1987 by the economist Robert Solow and has<br />
since been discussed by a number of other authors with contradictory opinions (Mukhopadhyay et al.,<br />
1995; Brynjolfsson, 1996; Bharadwaj et al., 1999; Devaraj/Kohli, 2000). The ensuing discussions<br />
about increasing IT efficiency resulted in a call for IT/Business Alignment, which is described as a<br />
long-term mutual coordination of business and information technology. One of the emerging problems<br />
is that IT lags behind changes in operational structures. This results from the large effort needed to<br />
adjust IT solutions, combined with the fundamental unpredictability of the consequences of changes<br />
to the existing architecture.<br />

The phase of business systems planning (BSP) began as early as the seventies. Particular attention<br />
was paid to deriving the IT strategy from the company’s business strategy. Henderson/Venkatraman<br />
(1993) developed this partial model further into the Strategic Alignment Model (SAM). SAM works<br />
with internal and external objects of consideration, which stand in a reciprocal relationship and are<br />
matched to each other as well as possible.<br />

Beimborn et al. (2006) examine IT/Business Alignment from the perspective of the resource-based<br />
view (RBV). In the RBV, the heterogeneity of resources accounts for the differing competitiveness of<br />
companies; the controlled use of resources creates competitive advantage. Building on the SAM of<br />
Henderson/Venkatraman (1993), Beimborn et al. (2006) construct a research model covering the<br />
usage of IT, the business unit and their interplay. From this they derive a set of hypothesized effect<br />
relationships between IT usage (ITU), business capability (BUS), IT/Business Alignment (ITBA) and<br />
process performance (PP). Their research model confirms the findings of Hitt et al. (2001) that a<br />
better business capability (BUS) increases the performance of the business process. The<br />
second-strongest effect is the correlation between ITBA and PP, indicating that better alignment<br />
between the business units and the IT department increases overall process performance. Their results<br />
also support the findings of Devaraj/Kohli (2000) and Lee (2001) that extensive use of IT and<br />
effective use of IT potential result in higher business process performance. Two further hypotheses,<br />
namely that better alignment between business units and IT departments strengthens the impact of IT<br />
on process efficiency, and that business resources directly affect process efficiency, could not be<br />
supported. Their results help to explain the paradoxical observations in the literature and suggest that<br />
the IT paradox may disappear.<br />
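The kind of effect relationship tested here, e.g. between ITBA and PP, can be illustrated with a toy correlation computation. The construct scores below are invented, not drawn from Beimborn et al.'s questionnaire data, and a real study would use structural equation modelling rather than a single Pearson coefficient.

```python
import statistics

# Hypothetical, made-up construct scores for five business units
# (for illustration only; not the study's actual data).
itba = [3.1, 4.2, 2.5, 4.8, 3.6]   # IT/Business Alignment (ITBA)
pp   = [2.9, 4.0, 2.2, 4.9, 3.4]   # process performance (PP)

def pearson(x, y):
    """Plain Pearson correlation coefficient between two score lists."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson(itba, pp)
print(round(r, 2))
```

A coefficient close to 1 would correspond to the kind of positive ITBA-to-PP relationship the study reports; the invented numbers above are chosen merely to make that pattern visible.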

Teubner (2006) also builds on the SAM and criticises the hitherto strongly engineering-driven side of<br />
the methods discussed. He examines methodologies for integrated organization and application design,<br />
with an emphasis on functional integration and strategic fit in the context of SAM. He subjects his<br />
methodologies, Information Engineering, Business Process Reengineering and Business Engineering,<br />
to a comparative assessment. Information Engineering sees itself as an “(…) integrated set of<br />
techniques, based on corporate planning, which result in the analysis, design and development of<br />
systems which support those plans exactly” (Finkelstein, 1992). IE is a slightly expanded version of<br />
the strategy-execution perspective of SAM. Business Process Reengineering, Teubner’s next method,<br />
has already been described earlier in this paper. This more radical approach to IT-driven<br />
reorganization starts with the identification of business processes from the perspective of business<br />
strategy; the available technologies that support business process innovation are the main focus of<br />
BPR. Teubner’s last method is a direct response to Business Process Reengineering, because<br />
reorganization projects following the BPR paradigm often fail. Business Engineering (BE) aims to<br />
relativize the radical redesign and process orientation of BPR and separates strategy, processes and<br />
systems. A feature of BE is that IT potentials are actively incorporated into IS designs.<br />

The construction of project-specific methods, especially in change projects, is one research focus of<br />
Baumöl (2006). While the demand for standardization in order to minimise costs offers significant<br />
potential, the call for flexibility requires individualization in specific contexts. Baumöl (2006)<br />
therefore recommends standardizing separate activity levels, which can then be combined into an<br />
individual method.<br />

The following table provides an overview of the approaches selected by the authors and summarizes<br />

the evidence of the literature review.<br />

Author: Beimborn et al. (2006)<br />
Research method: Questionnaire<br />
Design, Approach: Based on the SAM approach; areas of IT usage, the business unit and their<br />
interactions<br />
Findings: 1. Correlation of BUS to PP; 2. Correlation of ITBA to PP; 3. Correlation of IT to PP<br />

Author: Teubner (2006)<br />
Research method: Literature review<br />
Design, Approach: Based on the SAM approach; Information Engineering, Business Process<br />
Reengineering, Business Engineering<br />
Findings: IT/Business alignment is generally accepted as a prerequisite for a value contribution of IT;<br />
alignment is not to be understood as the unilateral adjustment of IT to the business, but is also driven<br />
by IT and includes reaction processes<br />

Author: Baumöl (2006)<br />
Research method: Literature review, interviews, case studies<br />
Design, Approach: Association of the conflicting demands for flexibility and for standardization;<br />
topics involved in the planning and initiation of a change project; individual and standardized design<br />
elements of a change method, which result in an appropriate IT/Business Alignment; a design process<br />
for change methods meeting the above requirements<br />
Findings: There are reference scenarios which allow the reuse of method fragments<br />

Table 1: IT/Business Alignment Approaches<br />

The alignment literature focuses mainly on the strategic coordination of IT and the business units.<br />
According to Gordon/Gordon (2000), there is a research deficit at the level of IT and business<br />
structure.<br />

A life-cycle-oriented view<br />

The following lines present a literature review of these aspects, providing one example each for<br />
acquisition, realization and operation. It takes into account both German and Anglo-American<br />
literature of the last five years, trying to represent the variety of theories and methods used.<br />

IT investments constitute a major part of IT costs, but the benefits of IT usage are mostly not directly<br />
visible and therefore difficult to justify. The value and benefits of IT depend on its use in an<br />
organizational context. In order to make this benefit contribution visible, Kumar (2004)<br />
presents a framework for assessing the business value of information technology infrastructure based<br />
on the asset valuation literature in finance. Kumar starts his work with a literature review and<br />
identifies three IT-value-related research streams: studies using econometric techniques, traditional<br />
financial evaluation methods such as Net Present Value (NPV) or real options theory, and research focusing on<br />



the organizational value of flexible IT infrastructures. Kumar’s framework is based on all three<br />
streams. As the value of IT depends on its usage within an organization, the value changes over time<br />
according to IT usage. The author differentiates drift changes in value (small), noise-term changes,<br />
and jump changes in value (large) and gives some examples of events causing these changes,<br />
including the resulting benefits and costs. He describes the changes, their causes and effects in a<br />
mathematical formula resulting in the stochastic value change of IT. The value of IT can therefore be<br />
seen as dynamic instead of static.<br />
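The distinction between drift, noise-term and jump changes can be sketched as a simple discrete-time simulation; the additive form and all parameter values below are illustrative assumptions, not Kumar's actual valuation formula.

```python
import random

def simulate_it_value(v0, periods, drift=0.5, noise_sd=1.0,
                      jump_prob=0.05, jump_size=-20.0, seed=42):
    """Toy discrete-time path of IT infrastructure value:
    small deterministic drift + Gaussian noise + rare large jumps
    (e.g. a sudden loss of value when a platform becomes obsolete)."""
    rng = random.Random(seed)  # fixed seed keeps the sketch reproducible
    values = [v0]
    for _ in range(periods):
        jump = jump_size if rng.random() < jump_prob else 0.0
        values.append(values[-1] + drift + rng.gauss(0.0, noise_sd) + jump)
    return values

path = simulate_it_value(100.0, 24)
print(len(path), round(path[-1], 1))
```

The sketch only illustrates why the value of IT appears dynamic rather than static: the drift term models gradual change, the noise term small fluctuations, and the jump term rare events with large value effects.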

According to Zarnekow/Scheeg/Brenner, IT investments are only responsible for about one fourth of<br />
IT costs. The major part of the costs results from IT production: operation, support and maintenance.<br />
Although this cost structure is commonly known and concepts like total cost of ownership, life-cycle<br />
costing, zero-based pricing, all-in costs or the cost-ratio method exist in the (production) literature,<br />
practice often neglects production costs. Zarnekow/Scheeg/Brenner use the life-cycle-cost model and<br />
adapt it in order to investigate IT costs. The model connects application-related tasks with the steps of<br />
the application life-cycle model. This way, costs for the tasks can be assigned to a specific application.<br />
Secondly, the authors conduct case studies in private companies and one public administration in<br />
order to describe application life-cycle costs in practice. They were able to base the results of their<br />
study on 30 applications. The life-cycle contains five steps: planning, first development, production,<br />
further development and taking the application out of service. Although the organizations<br />
participating in the case study could not provide detailed cost information, the findings show that<br />
higher costs for the development of a specific type of application result in lower costs during the<br />
production phase of the application. The case studies showed that the managerial instruments used in<br />
practice are not sufficient for application cost accounting. In order to avoid wrong managerial<br />
decisions, life-cycle-oriented cost accounting models have to be developed and used. Existing cost<br />
accounting models do not work in practice because they differentiate between the development and<br />
the usage of IS instead of taking a holistic, life-cycle-oriented view. It is one challenge for IT<br />
Controlling to capture application-related costs as a prerequisite for a life-cycle-oriented view of IS.<br />
According to Zarnekow/Scheeg/Brenner, their model is only the beginning of life-cycle orientation<br />
and has to be developed further.<br />
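The core idea of assigning task costs to the life-cycle step of a specific application can be sketched as follows; the five phase names follow the steps above, while the application name and cost figures are invented for illustration.

```python
from collections import defaultdict

# The five life-cycle steps named in the study.
PHASES = ("planning", "first development", "production",
          "further development", "retirement")

def lifecycle_costs(bookings):
    """Aggregate task cost bookings (application, phase, amount)
    into per-application, per-phase totals."""
    totals = defaultdict(lambda: dict.fromkeys(PHASES, 0.0))
    for app, phase, amount in bookings:
        if phase not in PHASES:
            raise ValueError(f"unknown life-cycle phase: {phase}")
        totals[app][phase] += amount
    return dict(totals)

# Invented bookings for a single hypothetical application.
bookings = [
    ("CRM", "planning", 50.0),
    ("CRM", "first development", 200.0),
    ("CRM", "production", 600.0),   # operation dominates, as the study suggests
    ("CRM", "further development", 120.0),
]
costs = lifecycle_costs(bookings)
print(costs["CRM"]["production"])
```

Such an aggregation is the prerequisite the authors call for: only when costs are captured per application and per phase can development spending be traded off against later production costs.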

Doll (Doll 2003) takes a quite different approach to the benefits of IS/IT. He develops a process for post-implementation IT benchmarking, i.e. benchmarking after an application is in operation. This approach is justified by the need for continuous learning from the past in order to be able to compete in the future. To build this process, Doll uses a causal model of how induced and autonomous learning factors influence IT usage and the impacts of IT on the organization and its processes. Using the causal model, CIOs are able to identify learning problems and possible remedies. Benchmarking shows how one company benefits from the impacts of IT usage compared to another. To gain maximum benefit, Doll adapts the typical benchmarking process identified in the literature to the special needs of his causal model. The adapted process starts with the determination of the IT benchmarking purposes. Secondly, the benchmarking objects have to be defined. Next, the outcome and process attributes to benchmark have to be identified. The fourth step involves the collection of internal data and, in parallel, the selection of external comparison data. Once all data have been collected, both competitive and internal comparisons have to be made. The next step analyzes the causal models in alternative (internal vs. best-in-class) data sets. Finally, action plans have to be developed and implemented, and the resulting improvements monitored. The process proposed by the author enables the benchmarking of IT outcomes (effective use and impacts). IT benchmarking is an important tool in IT benefit analysis as it can “justify investments in infrastructure, provide a window to user satisfaction, improve the redesign of IT-enabled business processes, and enhance user learning or knowledge management” (Doll 2003).
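Since Doll's adapted process is a fixed sequence of steps, it can be sketched as an ordered pipeline. The encoding below is our own illustration (the step names paraphrase the steps described above; the handler mechanism is hypothetical and not part of Doll's work):

```python
# Illustrative sketch: Doll's adapted IT benchmarking process modelled as
# an ordered pipeline of named steps. The handler mechanism is invented.
BENCHMARKING_STEPS = [
    "determine benchmarking purposes",
    "define benchmarking objects",
    "identify outcome/process attributes",
    "collect internal and select external data",
    "make competitive and internal comparisons",
    "analyze causal models in alternative data sets",
    "develop action plans and monitor improvements",
]

def run_benchmarking(handlers, context):
    """Run each step's handler in the fixed order, threading a shared context."""
    for step in BENCHMARKING_STEPS:
        handler = handlers.get(step)
        if handler is None:
            raise KeyError(f"no handler for step: {step}")
        context = handler(context)
    return context

# A trivial run that only records the order in which steps execute:
log = []
handlers = {s: (lambda ctx, s=s: (log.append(s), ctx)[1])
            for s in BENCHMARKING_STEPS}
result = run_benchmarking(handlers, {})
```

Modelling the process as an explicit ordered list makes the key property of Doll's proposal visible: the steps build on each other, so none of them can be skipped or reordered.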

Table 2 provides an overview of the presented papers, their authors, the life-cycle step they address, and the theories, concepts and methods used.



Author | IS life-cycle step | Used theories, concepts and methods
------ | ------------------ | -----------------------------------
(Kumar 2004) | IT investments; could be initial investments or maintenance investments | Asset valuation theories: econometric techniques, financial evaluation (NPV, real options theory), organizational value
(Zarnekow et al. 2004) | IT production: operation, support, maintenance | No theories, but concepts and methods: total costs of ownership, life-cycle-costing, zero-based-pricing, all-in-costs, the cost-ratio-method
(Doll 2003) | Post-implementation | No theories, but concepts and methods: benchmarking, induced and autonomous learning

Table 2: Life-cycle-oriented approaches

The life-cycle-oriented view on IT Controlling shows that IT Controlling has to take both IT costs and IT benefits into account. Becker and Winkelmann (Becker/Winkelmann 2004) characterize this evaluation of IT costs and benefits as obtaining, processing and analysing data in order to prepare managerial decisions concerning the acquisition, realization and operation of hardware and software. The data used can comprise quantitative monetary, quantitative non-monetary and qualitative data (Becker/Winkelmann 2004).

A performance-oriented view<br />

Given the increasing resource commitment to information systems and the growing role of IT in achieving business goals, measuring the efficiency and effectiveness of IS is a major topic in IT/IS management.

This chapter gives an overview of different approaches that contribute to the question ‘How can the performance of the IS function be measured?’, with focus on (1) the IS function in general and (2) the dependent variables and dimensions that can be used for the development of measurement models. “Performance measurement can be defined as the process of quantifying the efficiency and effectiveness of action” (Neely et al. 1995, p. 1229). This paper discusses performance measurement of the IS function. The term ‘IS function’ thereby refers to “all IS groups and departments within the organization” (Saunders/Jones 1992, p. 64).

There are numerous publications on IS performance measurement at the subfunctional level, e.g. measurements of user satisfaction or information quality. Only few publications within the last two decades suggest a comprehensive approach to performance measurement of the IS function. As the objective of this chapter is to give an overview of general constructs and models for IS performance measurement, the literature review focuses on approaches suitable for measuring the IS function as a whole, not only a single IS instance or subfunction.

In 1992, DeLone and McLean (DeLone/McLean 1992) proposed the DeLone and McLean IS Success Model, which encompasses six major dimensions of IS success: system quality, information quality, use, user satisfaction, individual impact and organizational impact. These six success dimensions are arranged within an interdependent success construct that proposes the following dependencies: system quality and information quality affect both use and user satisfaction; use and user satisfaction impact individual performance, which should eventually have an impact on organizational performance. Based on the success research contributions of the following ten years, DeLone/McLean proposed an updated IS success model in 2003 (DeLone/McLean 2003). The major enhancements were the addition of a new dimension, service quality, and the consolidation of the dimensions individual impact and organizational impact into the single dimension net benefits. According to DeLone/McLean, the stakeholder of these net benefits, e.g. an individual or an organization, has to be defined depending on the context of use.
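The dependency structure of the 1992 model can be written down as a small directed graph. The encoding below is our own illustration (the dimension names are those of the 1992 model as described above; the graph representation and the traversal helper are not the authors' notation):

```python
# Illustrative sketch: the causal structure of the DeLone/McLean (1992)
# IS success model as an adjacency mapping (dimension -> dimensions it affects).
DM_1992 = {
    "system quality":        ["use", "user satisfaction"],
    "information quality":   ["use", "user satisfaction"],
    "use":                   ["individual impact"],
    "user satisfaction":     ["individual impact"],
    "individual impact":     ["organizational impact"],
    "organizational impact": [],
}

def downstream(model, dimension, seen=None):
    """All dimensions transitively affected by `dimension`."""
    seen = set() if seen is None else seen
    for nxt in model.get(dimension, []):
        if nxt not in seen:
            seen.add(nxt)
            downstream(model, nxt, seen)
    return seen

affected = downstream(DM_1992, "system quality")
```

Traversing the graph from a quality dimension makes the chain of the model explicit: quality dimensions reach organizational impact only indirectly, via use and user satisfaction and then individual impact.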

Another contribution to IS performance measurement was made in 1992 by Saunders and Jones (Saunders/Jones 1992). They identified 10 dimensions that are critical for IS performance and set up a ranking of these dimensions depending on their impact on IS performance. They were inspired by contingency theory, which “…suggests that a number of variables influence the performance of information systems; the better the fit between these variables and the design and use of the MIS, the better the MIS performance” (Weill/Olson 1989, p. 63). Saunders and Jones investigated the impact of organizational contingency factors, such as the hierarchical position of the top IS executive, as well as contingency factors relating to the perspective of the IS evaluator. Based on the impact of these contingency factors, a relative ranking of the selected IS performance dimensions was set up. In contrast to the DeLone/McLean model (DeLone/McLean 1992), Saunders/Jones did not propose causal relationships between the IS performance dimensions, but suggested that this issue could be addressed in future research in order to enhance their approach.

In 1999, Martinsons et al. (Martinsons et al. 1999) proposed a balanced scorecard approach for the measurement of IS activities, based on Kaplan and Norton (Kaplan/Norton 1996) and including the following perspectives: business value, user orientation, internal processes and future readiness. For each dimension they proposed a set of measures derived from several other approaches, e.g. information economics, which was introduced in the late 1980s, as well as different IS success approaches such as that of DeLone/McLean. They extended these approaches by introducing the future readiness dimension, which had not been regarded to that extent in earlier approaches.

A further contribution to IS performance measurement was made by Heo and Han (Heo/Han 2003). Influenced by contingency theory (Weill/Olson 1989), they developed a model to determine which IS dimensions are appropriate in evolving computing environments. The set of potential IS performance dimensions was taken from the IS assessment selection model of Myers et al. (Myers et al. 1997), which basically combined the IS success dimensions of DeLone/McLean (DeLone/McLean 1992) with the contingency framework developed by Saunders/Jones (Saunders/Jones 1992). Furthermore, they derived four IS structural typologies, e.g. centralized and decentralized computing environments. The relationships between the aforementioned IS performance dimensions and these typologies were tested in order to find out which dimensions are more appropriate than others for measuring IS performance in a certain computing environment.

In 2005, Chang and King (Chang/King 2005) proposed a functional scorecard for the measurement of IS performance. Their work was influenced by many earlier studies, e.g. those of DeLone/McLean and Saunders/Jones. Compared to earlier studies, they proposed a more detailed measurement model including 18 unidimensional factors within the three dimensions system performance, information effectiveness and service performance. The impact of these dimensions on business process performance, and the impact of IS performance and business process performance on organizational performance, is assumed in the model, but examining these relationships was not the main objective of their research paper.



Author | Research method | IS performance dimensions | Causal relationships | Tested contingency variables
------ | --------------- | ------------------------- | -------------------- | ----------------------------
(DeLone/McLean 1992) | Literature review of 180 articles | system quality, information quality, use, user satisfaction, individual impact, organizational impact | Yes | -
(Saunders/Jones 1992) | Delphi study; senior executive interviews | IS impact on strategic dimension, integration of IS planning, quality of information, IS contribution to financial performance, IS operational efficiency, user/management attitude about IS, IS staff competence, technology integration, system development practice, IS ability of assimilation of new technology | No | Organizational perspective of IS evaluator
(Martinsons et al. 1999) | 3 case studies | business value, user orientation, internal processes, future readiness | Yes | -
(DeLone/McLean 2003) | Review of 100 articles; 1 case study | system quality, information quality, intention to use, user satisfaction, net benefit | Yes | -
(Heo/Han 2003) | Mail survey of 187 IS managers | service quality, system quality, information quality, use, user satisfaction, individual impact, work group impact, organizational impact | No | IS structural typologies
(Chang/King 2005) | Mail survey of 120 firms | system performance, information effectiveness, service performance, business process effectiveness, organizational performance | Yes | -

Table 3: Approaches in IS performance measurement


Table 3 summarizes the major developments in IS performance measurement of the last two decades. The main conclusion that can be drawn from the review of these articles is that there are two different kinds of approaches for developing models for the performance measurement of the IS function:

(1) Approaches that provide models with predefined and interrelated dimensions for performance measurement, like those of (DeLone/McLean 1992), (DeLone/McLean 2003), (Martinsons et al. 1999) and (Chang/King 2005). These approaches focus on providing general templates for performance measurement that can be adapted and refined according to the object of measurement.

(2) Approaches that provide a set of possible dimensions for performance measurement that can be combined according to the specific contingency factors of an organization, like those of (Saunders/Jones 1992) and (Heo/Han 2003). These approaches focus on providing a ‘construction kit’ for performance measurement models, including rules that describe which dimensions should be chosen for performance measurement depending on individual contingency aspects of an organization.

Common to both kinds of approaches is that there is no general IS performance measurement model that can be applied in practice without modifications; additional work is always required to adapt an approach to the individual requirements of an organization and the object of measurement.
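The ‘construction kit’ idea — selecting performance dimensions based on an organization's contingency factors — can be sketched as a simple rule table. This is purely an illustration of the mechanism: the factors, their values, the rules and the resulting dimension lists below are invented, not taken from Saunders/Jones or Heo/Han.

```python
# Illustrative sketch of a contingency-driven 'construction kit': rules
# map a (contingency factor, value) pair to the performance dimensions
# it makes relevant. All factor and dimension names are hypothetical.
RULES = {
    ("computing_environment", "decentralized"): ["service quality", "user satisfaction"],
    ("computing_environment", "centralized"):   ["system quality", "operational efficiency"],
    ("evaluator_perspective", "top_management"): ["organizational impact"],
    ("evaluator_perspective", "users"):          ["user satisfaction", "information quality"],
}

def select_dimensions(contingencies):
    """Combine the dimensions suggested by each matching contingency rule."""
    selected = []
    for factor_value in contingencies.items():
        for dim in RULES.get(factor_value, []):
            if dim not in selected:  # keep insertion order, avoid duplicates
                selected.append(dim)
    return selected

dims = select_dimensions({"computing_environment": "decentralized",
                          "evaluator_perspective": "users"})
```

The sketch shows why such approaches still require adaptation work in practice: the rule table itself has to be filled in for each organization before any measurement model can be assembled from it.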

Outlook<br />

As our synthesis shows, several worthwhile avenues of research could be identified:<br />

• Although the characteristics of business processes and the activities towards effective information systems seem to be thoroughly discussed, two main research gaps could be identified. First, the integration of organizational knowledge from business processes into information systems, and vice versa, could tap potentials when it comes to business process redesign, continuous improvement, and organizational knowledge management. Second, research on deducing implications from analyzing and simulating business processes is largely missing. The potential of simulating and subsequently learning from existing and planned business processes may help to identify more accurate requirements for information system development.

• In the area of IT governance, several research opportunities could be identified. For instance, how to determine the trade-off between the effort of implementing controls and their impact on the IT organization is still an open issue. With regard to IT outsourcing, research on its structure and determinants is necessary to understand the relationship between service customer and service provider.




• IT Controlling includes different views on validating the contribution of information management to business performance. The following areas could be addressed in further research:

o Regarding the evaluation of IT costs and benefits from a life-cycle-oriented view, authors apply concepts and methods known from economic science, but they hardly apply economic theories. Future research in the field could extend IS knowledge by drawing on economic theories.

o Regarding IS performance measurement, a more comprehensive measurement model incorporating the findings of recent years should be a main objective of further research. In particular, the integration of new findings on the relationship between IS performance and business performance, new findings on different contingency factors, and new performance dimensions arising from changes in the technological and organizational environment would be valuable for an extended model of IS performance measurement.

• All three analyzed topics have revealed the need for an integrated and reliable information supply for management decisions. We argue that this information is available in nearly all IT organizations, but efficient measures to collect and synthesize it are missing.

Although this synthesis is far from complete, we analyzed three important aspects of effective and efficient information management and derived several research opportunities.

References<br />

Aier, S.; Schönherr, M. (2006). Status quo geschäftsprozessorientierter Architekturintegration.<br />

WIRTSCHAFTSINFORMATIK 48 (2006) 3, S. 188–197.<br />

Attaran, M. (2004). Exploring the relationship between information technology and business<br />

process reengineering. Information & Management 41 (2004) 585–596.<br />

Becker, J.; Winkelmann, A. (2004). IV-Controlling. Wirtschaftsinformatik, 46(3), 213-221.

Broadbent, M.; Weill, P. (1997). Management by Maxim: How Business and IT Managers Can<br />

Create IT Infrastructures. Sloan Management Review, 38(3), 77-92.<br />

Brockmans, S.; Ehrig, M.; Koschmider, A.; Oberweis, A.; Studer, R. (2006). Semantic Alignment<br />

of Business Processes. Proceedings of the Eighth International Conference on<br />

Enterprise Information Systems (ICEIS 2006), 191-196.<br />

Brown, A.E.; Grant, G.G. (2005). Framing the Frameworks: A Review of IT Governance Research. Communications of AIS, 2005(15), 696-712.

Brynjolfsson, E. (1996). The Contribution of Information Technology to Consumer Welfare.<br />

Journal of Economic Perspectives, 14(4), 23 - 48.<br />

Brynjolfsson, E. (1998). Beyond the Productivity Paradox: Computers are the Catalyst for Bigger<br />

Changes. Communications of the ACM, 41(8), 49-55.


Buchowicz, B.S. (1991). A Process Model of Make vs. Buy Decision Making: The Case of<br />

Manufacturing Software. IEEE Transactions on Engineering Management, 38 (1), 24-<br />

32.<br />

Carr, N.G. (2003). IT Doesn't Matter. Harvard Business Review, 81(5), 41-49.<br />

Carr, N.G. (2005a). Does Software Matter? Informatik Spektrum, 28(4), 271-273.<br />

Carr, N.G. (2005b). The End of Corporate Computing. MIT Sloan Management Review, 46(3), 67-<br />

73.<br />

Chang, J.C.-J.; King, W.R. (2005). Measuring the Performance of Information Systems: A<br />

Functional Scorecard. Journal of Management Information Systems, 22(1), 85-115.<br />

Davenport, T.H. (1992). Process Innovation: Reengineering Work Through Information<br />

Technology (Hardcover). Harvard Business School Press Books (S. 1).<br />

Davenport, T.H.; Hammer, M.; Metsisto, T.J. (1989). How executives can shape their company's<br />

information systems. Harvard Business Review, 67(2 (March-April)), 130-134.<br />

Davenport, T.H.; Short, J.E. (1990). The New Industrial Engineering: Information Technology and<br />

Business Process Redesign. Sloan Management Review, 31(4), 11-27.<br />

de Looff, L.A. (1998). Information Systems Outsourcing: Theories, Case Evidence and a Decision<br />

Framework. In Willcocks, L.P.; Lacity, M.C. (Hrsg.), Strategic Sourcing of Information<br />

Systems (S. 249-282). Chichester: Wiley.<br />

DeLone, W.H.; McLean, E.R. (1992). Information Systems Success: The Quest for the Dependent<br />

Variable. Information Systems Research, 3(1), 60 - 95.<br />

DeLone, W.H.; McLean, E.R. (2003). The DeLone and McLean Model of Information Systems<br />

Success: A Ten-Year Update. Journal of Management Information Systems, 19, 9-30.<br />

Deming, W.E. (1982). Quality Productivity and Competitive Position. Cambridge, Mass.:<br />

Massachusetts Inst Technology.<br />

Dibbern, J.; Goles, T.; Hirschheim, R.; Jayatilaka, B. (2004). Information Systems Outsourcing: A<br />

Survey and Analysis of the Literature. The DATA BASE for Advances in Information<br />

Systems, 35(4), 6-102.<br />

Doll, W.J. (2003). A process for post-implementation IT benchmarking. Information &<br />

Management (41), 199-212.<br />

Glass, R.L. (1996). The End of the Outsourcing Era. Information Systems Management, 13(2), 89-<br />

91.<br />

Goles, T. (2001). The impact of the client-vendor relationship on information systems outsourcing<br />

success. Doctoral Thesis, University of Houston.<br />

Great Britain Office of Government Commerce (2002). Service Delivery. (5 Auflage). London:<br />

TSO The Stationery Office.<br />

Hammer, M.; Champy, J.; Künzel, P. (1994). Business reengineering: die Radikalkur für das Unternehmen: Campus-Verl.

Henderson, J.C.; Venkatraman, N. (1999). Strategic alignment: Leveraging information<br />

technology for transforming organizations. IBM Systems Journal, 38(2/3), 472.<br />

Henkel, M.; Zdravkovic, J.; Johannesson, P. (2004). Service-based Processes – Design for<br />

Business and Technology. ICSOC'04. New York: ACM.<br />

Heo, J.; Han, I. (2003). Performance measure of information systems (IS) in evolving computing<br />

environments. Information & Management, 40, 243-256.<br />

IT Governance Institute (2005). COBIT 4.0: Control Objectives, Management Guidelines,<br />

Maturity Models. Rolling Meadows, IL, USA: IT Governance Institute.<br />

Jacobson, I.; Ericsson, M.; Jacobson, A. (1994). The object advantage: business process<br />

reengineering with object technology: ACM Press/Addison-Wesley Publishing Co.<br />

Junginger, M. (2004). Wertorientierte Steuerung von Risiken im Informationsmanagement:<br />

Deutscher Universitätsverlag.<br />

Kaplan, R.S.; Norton, D.P. (1996). Using the Balanced Scorecard as a Strategic Management System. Harvard Business Review, 74(1), 75-85.



Kashanchi, R.; Toland, J. (2006). Can ITIL contribute to IT/business alignment? An initial<br />

investigation. WIRTSCHAFTSINFORMATIK, 48(5), 340-348.<br />

Keller, G.; Nüttgens, M.; Scheer, A.-W. (1992). Semantische Prozeßmodellierung auf der Grundlage "Ereignisgesteuerter Prozeßketten (EPK)" (Arbeitsbericht): Institut für Wirtschaftsinformatik, Universität Saarbrücken.

Kern, T. (1997). The Gestalt of an information technology outsourcing relationship: an<br />

exploratory analysis. Konferenzbeitrag: 18th International Conference on Information<br />

Systems, Atlanta, GA, USA,<br />

Kern, T.; Willcocks, L. (2000). Exploring information technology outsourcing relationships:<br />

theory and practice. Journal of Strategic Information Systems, 2000(9), 321-350.<br />

Kern, T.; Willcocks, L.P. (2001). The relationship advantage: Information technologies, sourcing,<br />

and management. Oxford: Oxford University Press.<br />

Klepper, R.; Jones, W. (1998). Outsourcing Information Technology, Systems and Services. Upper<br />

Saddle River, NJ: Prentice Hall.<br />

Kobayashi, T.; Tamaki, M.; Komoda, N. (2003). Business process integration as a solution to the<br />

implementation of supply chain management systems Journal of Information and<br />

Management, 40(8), 769-780.<br />

Koh, C.; Ang, S.; Straub, D.W. (2004). IT Outsourcing Success: A Psychological Contract<br />

Perspective. Information Systems Research, 15(4), 356-373.<br />

Krcmar, H. (2005). Informationsmanagement. (4 Auflage). Berlin: Springer.<br />

Kumar, R. (2004). A Framework for Assessing the Business Value of Information Technology<br />

Infrastructures. Journal of Management Information Systems, 21(2), 11-32.<br />

Lacity, M.C.; Hirschheim, R. (1995). Beyond the Information Systems Outsourcing Bandwagon:<br />

The Insourcing Response. New York, NY: John Wiley and Sons.<br />

Lacity, M.C.; Hirschheim, R.A. (1993a). Information Systems Outsourcing - Myths, Metaphors<br />

and Realities. Chichester, New York: John Wiley & Sons.<br />

Lacity, M.C.; Hirschheim, R.A. (1993b). The Information Systems Outsourcing Bandwagon. MIT<br />

Sloan Management Review, 35(1), 73-86.<br />

Lacity, M.C.; Willcocks, L.P. (2001). Global Information Technology Outsourcing. In Search of<br />

Business Advantage. Chichester, New York: Wiley.<br />

Lacity, M.C.; Willcocks, L.P.; Feeny, D.F. (1995). IT Outsourcing: Maximize Flexibility and<br />

Control. Harvard Business Review, 73(3), 84-93.<br />

Lee, J.-N.; Huynh, M.Q.; Kwok, R.C.-W.; Pi, S.-M. (2003). IT Outsourcing Evolution--Past,<br />

Present, and Future. Communications of the ACM, 46(5), 84-89.<br />

Lee, J.-N.; Kim, Y.-G. (1999). Effect of Partnership Quality on IS Outsourcing Success:<br />

Conceptual Framework and Empirical Validation. Journal of Management Information<br />

Systems, 15(4), 29-61.<br />

Legner, C.; Wende, K. (2006). Towards an Excellence Framework for Business Interoperability.<br />

Konferenzbeitrag: 19th International Bled eConference on eValues, Bled, Slovenia,<br />

Loh, L.; Venkatraman, N. (1992a). Determinants of Information Technology Outsourcing: A<br />

Cross-Sectional Analysis. Journal of Management Information Systems, 9(1), 7-24.<br />

Loh, L.; Venkatraman, N. (1992b). Diffusion of Information Technology Outsourcing: Influence<br />

Sources and the Kodak Effect. Information Systems Research, 3(4), 334-358.<br />

Loh, L.; Venkatraman, N. (1995). An empirical study of information technology outsourcing:<br />

Benefits, risk and performance implications. Konferenzbeitrag: Sixteenth International<br />

Conference on Information Systems, Amsterdam, 277-288.<br />

Luftman, J.; Kempaiah, R.; Nash, E. (2006). Key Issues for IT Executives 2005. MIS Quarterly<br />

Executive, 5(2), 27-45.<br />

March, J.G.; Shapira, Z. (1987). Managerial Perspectives on Risk and Risk Taking. Management<br />

Science, 33(11), 1404-1418.<br />

Martinsons, M.; Davisons, R.; Tse, D. (1999). The balanced scorecard: a foundation for the<br />

strategic management of information systems. Decision Support Systems, 25, 71-88.<br />



Meyer, M.; Zarnekow, R.; Kolbe, L.M. (2003). IT- Governance. Begriff, Status quo und<br />

Bedeutung. <strong>Wirtschaftsinformatik</strong>, 45(4), 445-448.<br />

Millar, V. (1994). Outsourcing Trends. Konferenzbeitrag: Outsourcing, Cosourcing and<br />

Insourcing Conference, University of California - Berkeley,<br />

Mintzberg, H. (1991). Generic Strategies. In Quinn, J.B.; Mintzberg, H. (Hrsg.)(S. 70-82).<br />

Englewood Cliffs, New Jersey: Prentice-Hall.<br />

Moitra, D.; Ganesh, J. (2005). Web services and flexible business processes: towards the adaptive<br />

enterprise. Information & Management, 42(7), 921-933.<br />

Myers, B.L.; Kappleman, L.A.; Prybutok, A. (1997). A comprehensive model of assessing the<br />

quality and productivity of the information system function: towards a theory for<br />

information systems assessment. Information Resources Management, 10(1), 6-25.<br />

Neely, A.; Gregory, M.; Platts, K. (1995). Performance measurement system design. International Journal of Operations & Production Management, 15(4), 1228-1236.

o.V. (2003). Board Briefing on IT Governance. Rolling Meadows, IL: IT Governance Institute.<br />

Piccoli, G.; Ives, B. (2004). Review: IT-Dependent Strategic Initiatives and Sustained Competitive<br />

Advantage: A Review and Synthesis of the Literature. MIS Quarterly, 29(4), 747-776.<br />

Picot, A.; Reichwald, R.; Wigand, R.T. (2003). Die grenzenlose Unternehmung: Information,<br />

Organisation und Management. Wiesbaden: Betriebswirtschaftlicher Verlag Dr. Th.<br />

Gabler.<br />

Porter, M.E. (1991). How competitive Forces shape Strategy. In Quinn, J.B.; Mintzberg, H.<br />

(Hrsg.)(S. 61-70). Englewood Cliffs, New Jersey: Prentice-Hall.<br />

Quinn, J.B.; Mintzberg, H. (1991). The Strategy Process: Concepts, Contexts, Cases. Englewood Cliffs, New Jersey: Prentice-Hall.

Raghupathi, W.R. (2007). Corporate Governance of IT: A Framework for Development. Communications of the ACM, 50(8), 94-99.

Rau, K.G. (2004). Effective Governance of IT: Design Objectives, Roles, and Relationships. Information Systems Management, 21(4), 35-42.

Rozinat, A.; Aalst, W.M.P.v.d. (2008). Conformance checking of processes based on monitoring<br />

real behavior. (Vol. 33, S. 64-95): Elsevier Science Ltd.<br />

Rozinat, A.; van der Aalst, W.M.P. (2006). Conformance Testing: Measuring the Fit and<br />

Appropriateness of Event Logs and Process Models. Business Process Management<br />

Workshops (S. 163-176).<br />

Sargent, A. (2006). Outsourcing relationship literature: an examination and implications for<br />

future research. Konferenzbeitrag: 2006 ACM SIGMIS CPR conference, Claremont,<br />

California, USA,<br />

Saunders, C.; Gebelt, M.; Hu, Q. (1997). Achieving Success in Information Systems Outsourcing.<br />

California Management Review, 39(2), 63-79.<br />

Saunders, C.S.; Jones, J.W. (1992). Measuring Performance of the Information Systems Function.<br />

Journal of Management Information Systems, 8(4), 63-82.<br />

Scheer, A.-W. (1998). ARIS - Modellierungsmethoden, Metamodelle, Anwendungen. (3 Auflage).<br />

Berlin: Springer.<br />

Scheer, A.W. (1994). Business Process Engineering, Reference Models for Industrial Enterprises.<br />

Berlin: Springer-Verlag.<br />

Schubert, P. (2007). Business Software as a Facilitator for Business Process Excellence: Experiences from Case Studies. Electronic Markets, 17(3), 187-198.

Smallman, C. (1999). Knowledge Management as Risk Management: A Need for Open<br />

Governance. Risk Management: An International Journal, 1(4), 7-20.<br />

Talwar, R. (1993). Business Re-engineering--a Strategy-driven Approach. Long Range Planning,<br />

26(6), 22-40.<br />

van der Aalst, W.; Weijters, T.; Maruster, L. (2004). Workflow Mining: Discovering Process<br />

Models from Event Logs. IEEE Transactions on Knowledge & Data Engineering, 16(9),<br />

1128-1142.<br />



von Jouanne-Diedrich, H. (2004). 15 Jahre Outsourcing-Forschung: Systematisierung und Lessons Learned. In Zarnekow, R.; Brenner, W.; Grohmann, H. (Hrsg.), Informationsmanagement. Konzepte und Strategien für die Praxis (S. 125-133). Heidelberg: dpunkt Verlag.

von Jouanne-Diedrich, H. (2007). Die IT-Sourcing-Map: Eine Orientierungshilfe im stetig<br />

wachsenden Dschungel der Outsourcing-Konzepte. In: http://www.ephorie.de/itsourcing-map.htm,<br />

zugegriffen am<br />

von Jouanne-Diedrich, H.; Zarnekow, R.; Brenner, W. (2005). Industrialisierung des IT-Sourcings. HMD - Praxis der Wirtschaftsinformatik, 245, 18-27.

Ward, J.; Elvin, R. (1999). A new framework for managing IT-enabled business change.<br />

Information Systems Journal, 9(3), 197-221.<br />

Weill, P. (2002). Don't Just Lead, Govern: Implementing Effective IT Governance (CISR Working<br />

Paper 326).<br />

Weill, P.; Olson, M. (1989). An Assessment of the Contingency Theory of Management<br />

Information Systems. Journal of Management Information Systems, 6(1).<br />

Weill, P.; Ross, J.W. (2004). IT Governance: How Top Performers Manage IT Decision Rights for<br />

Superior Results.<br />

Wibbelsman, D.; Maiero, T. (1994). Cosourcing. Konferenzbeitrag: Outsourcing, Cosourcing and<br />

Insourcing Conference, University of California, Berkeley,<br />

Willcocks, L.; Lacity, M. (1998). Strategic Sourcing of Information Systems: Perspectives and<br />

Practices. Chichester: Wiley.<br />

Willcocks, L.; Lacity, M.; Cullen, S. (2007). Information technology sourcing: Fifteen years of<br />

learning. In Mansell, R.; Avgerou, C.; Quah, D. (Hrsg.), The Oxford Handbook of<br />

Information and Communication Technologies (S. 244-272). Oxford: Oxford University<br />

Press.<br />

Willcocks, L.P.; Kern, T. (1998). IT outsourcing as strategic partnering: The case of the UK Inland<br />

Revenue. European Journal of Information Systems, 7(1), 29-45.<br />

Wu, I.-L. (2003). Understanding senior management’s behavior in promoting the strategic role of IT in process reengineering: use of the theory of reasoned action. Information & Management, 41(1), 1.

Zarnekow, R.; Scheeg, J.; Brenner, W. (2004). Untersuchung der Lebenszykluskosten von IT-Anwendungen. Wirtschaftsinformatik, 46(3), 181-187.

Zhao, J.L.; Cheng, H.K. (2005). Web services and process management: a union of convenience or<br />

a new area of research? Decision Support Systems, S. 1-8.<br />

26


Innovation Communities<br />

Ulrich Bretschneider, Michael Huber, Christoph Riedl<br />

Submission for WISSS 2008 / 2<br />

Technische Universität München<br />

Lehrstuhl für Wirtschaftsinformatik (I 17)<br />

Boltzmannstr. 3 – 85748 Garching<br />

{bretschneider | hubermic | riedlc}@in.tum.de<br />

Abstract Virtual communities have long been an inherent part of the Internet. Increasingly, these<br />

communities are used to develop all kinds of innovations, and various forms of them have emerged in<br />

different fields over recent years. This phenomenon of innovation communities has been researched to<br />

some degree, yet a common definition and common understanding is still lacking. This article<br />

conceptualises existing innovation communities through a literature review. Our review clusters the<br />

manifestations of innovation communities found in academic research and elaborates how they are<br />

related and where they intersect. It thereby serves as a first step towards a common definition of<br />

innovation communities.<br />

Introduction<br />

Virtual Communities and the Phenomenon “Innovation Community”<br />

Virtual communities have long been an inherent part of the Internet. Multitudinous virtual<br />

communities have taken root since the establishment of "The WELL", one of the first and most<br />

famous virtual communities, which is therefore often referred to as the primal community<br />

(cp. Rheingold, 1993).<br />

The scientific world has attended to this topic since the mid-1990s. Nevertheless, no commonly<br />

accepted definition of the phenomenon "virtual community", in the literature often also called virtual<br />

alliance, online community, or virtual group, has emerged to this day.<br />

Table 1 presents the most prominent definitions from a general point of view.<br />

27


Table 1 Definitions of virtual communities.<br />

Hagel III/Armstrong, 1997, p. 143: "Virtual Communities are groups of people with common interests<br />

and needs who come together online."<br />

Preece, 2000, p. 10: "An online community consists of: 1) People, who interact socially as they strive<br />

to satisfy their own needs or perform special roles, such as leading or moderating. 2) A shared<br />

purpose, such as an interest, need, information exchange, or service that provides a reason for the<br />

community. 3) Policies, in the form of tacit assumptions, rituals, protocols, rules, and laws that guide<br />

people's interactions. 4) Computer systems, to support and mediate social interaction and facilitate a<br />

sense of togetherness."<br />

Blanchard/Markus, 2004, p. 65: "[…] groups of people who interact primarily through<br />

computer-mediated communication and who identify with and have developed feelings of belonging<br />

and attachment to each other."<br />

A virtual community can be classified on the basis of the issues it focuses on and its objective (cp.<br />

Hagel III/Armstrong, 1997). Major examples are online health communities, whose members are<br />

affected by various diseases; in their ambition these communities match the goals of a self-help group<br />

(cp. Eysenbach et al., 2004; Leimeister, 2005; Maloney-Krichmar/Preece, 2002).<br />

Nevertheless, one can also observe virtual communities on the Internet whose members are engaged<br />

in the creation of innovations for different product categories. In the scientific literature, such virtual<br />

communities are referred to as Innovation Communities, Communities for Innovations, or<br />

Communities of Innovation. No unique taxonomy or definition of this phenomenon has been<br />

established in the literature yet.<br />

Aim and Research Method<br />

The purpose of the present article is to illustrate the different manifestations of such communities and<br />

to differentiate them from each other.<br />

In the following, we describe the research method used for this literature review. We first selected the<br />

scientific databases EBSCO and ScienceDirect as the search space. We then performed a title,<br />

abstract, and keyword search within these databases using the keywords "innovation" and<br />

"community". We limited the search results to A-level publications as identified by the German<br />

Scientific Commission for Information Systems (WKWI, 2008). Next, the resulting body of 96<br />

articles was filtered by reading their abstracts; most could be clearly classified as unrelated to our<br />

research objective. The resulting ten articles were included in our review. To ensure that no relevant<br />

literature was omitted because it did not use these keywords or was not published in one of the<br />

selected journals, we also included articles that (a) were already known to the authors, (b) were<br />

referenced in one of the relevant articles, or (c) could be found in "close proximity" to articles initially<br />

identified in this review (e.g., published in the same issue of a journal). Thus, we achieved a<br />

comprehensive overview of the literature on innovation communities.<br />
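The screening procedure described above can be sketched as a simple filter pipeline. The following Python fragment is purely illustrative: all article records and journal names are hypothetical, and the `relevant` flag merely stands in for our manual reading of abstracts; it does not reproduce the actual database exports or the WKWI journal list.<br />

```python
# Illustrative sketch of the two-stage literature screening described
# above. All article records and journal names are hypothetical; the
# "relevant" flag stands in for the manual reading of abstracts.

A_LEVEL_JOURNALS = {"MIS Quarterly", "Information Systems Research"}
KEYWORDS = ("innovation", "community")

def keyword_hits(article):
    """Stage 1: title, abstract, and keyword search for both terms."""
    text = " ".join(
        [article["title"], article["abstract"], " ".join(article["keywords"])]
    ).lower()
    return all(kw in text for kw in KEYWORDS)

def screen(articles):
    """Stage 2: keep keyword hits from A-level outlets, then apply the
    manual relevance judgement recorded in the 'relevant' field."""
    hits = [
        a for a in articles
        if keyword_hits(a) and a["journal"] in A_LEVEL_JOURNALS
    ]
    return [a for a in hits if a["relevant"]]

articles = [
    {"title": "How communities support innovative activities",
     "abstract": "Sharing within a user community drives innovation.",
     "keywords": [], "journal": "MIS Quarterly", "relevant": True},
    {"title": "Community health forums",
     "abstract": "Peer support online.",
     "keywords": [], "journal": "MIS Quarterly", "relevant": True},
    {"title": "Open innovation",
     "abstract": "A community of solvers.",
     "keywords": [], "journal": "Trade Magazine", "relevant": True},
]

print(len(screen(articles)))  # → 1
```

The snowballing steps (a)-(c) are deliberately omitted here, as they rely entirely on human judgement that no such filter can replace.<br />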

The paper is structured as follows. The next section reviews open source software communities<br />

followed by a review of innovation communities formed around physical goods. Then, the next<br />

section reviews business communities followed by a section on customer integration communities.<br />

Finally, innovation communities that are structured as market places are reviewed. Each of the<br />

sections addresses the aspects history, actors, and aims of the individual communities. A discussion<br />

concludes the article and presents an outlook on future work.<br />

Consumer Innovation Communities<br />

With currently more than 1.2 billion users, the Internet is obviously an "enormous pool of knowledge,<br />

impossible to encounter elsewhere" (Füller/Matzler, 2007, p. 381). Thus, it is the most suitable<br />

medium for addressing companies' customers and for accessing the creativity of individuals outside<br />

companies' R&D departments. As a matter of fact, there are many virtual communities broaching<br />

different issues, discussing particular problems, and collaboratively generating innovations. The<br />

literature shows that the innovative potential of these virtual communities is highly valuable for<br />

companies, which can thereby access the "swarm creativity" (Gloor, 2005) of a huge number of<br />

stakeholders.<br />

Besides several other ways of integrating external people into a company's innovation process, such<br />

as mass customisation or toolkits (cp. von Hippel, 2006; Reichwald/Piller, 2006; Walcher, 2007), the<br />

literature describes the community-based innovation approach as a suitable method for including<br />

stakeholders in the innovation process (Füller, 2005; Füller et al., 2006). These communities are<br />

mostly initiated by companies in order to trigger the community members' creativity along the<br />

company's specific needs.<br />

A first class of innovation communities that we found in our literature review are communities that<br />

have produced major innovations in the consumer product field. They are run completely by and for<br />

users. The most prominent example in this category is the open source phenomenon. Consumer<br />

innovation communities for other consumer products arose in response to the open source<br />

communities.<br />

One common distinction of innovation communities, proposed by West and O'Mahony (2008), is<br />

between autonomous and sponsored communities. A similar distinction is made by Reichwald and<br />

Piller, who distinguish between customer-driven and vendor-driven communities (Reichwald/Piller,<br />

2006, p. 184 ff.). In their research, West and O'Mahony distinguish innovation communities with<br />

regard to the (predominant) involvement of for-profit organisations. In their definition, an autonomous<br />

community is one that "is presently independent of any one firm and community managed"<br />

(West/O'Mahony, 2008, p. 149). This means in particular that such a community operates under a<br />

governance mechanism outside the reach of the authority embedded in employment relationships.<br />

Contributors to autonomous communities may be volunteers or may be paid by an organisation to<br />

contribute, but the decision-making process within the community remains autonomous of any one<br />

organisation.<br />

In sponsored communities, on the other hand, long- and short-term activities are controlled by one or<br />

more corporate entities.<br />

Consumer Innovation Communities for Software: Open Source Communities<br />

Talking about Open Source Communities, it is first necessary to define the underlying terms. "Open<br />

source" is widely used for both free software (FS) and open source software (OSS). According to<br />

Scacchi (2007), the two types of software differ, for example, in the licenses they are based on and in<br />

the context they refer to. Free software is mostly licensed under the GNU General Public License<br />

(GPL), whereas open source software in many cases uses other license types, which possibly do not<br />

include aspects of free software as the Free Software Foundation (FSF) defines them. Furthermore,<br />

free software often refers to a kind of social movement, open source software rather to a software<br />

development methodology. In one sentence: "Free software is always available as OSS, but OSS is<br />

not always free software" (Scacchi, 2007, p. 459).<br />

In the following, the term F/OSS is used to refer to both kinds of software. The term Open Source<br />

Community refers to communities which develop F/OSS in a collaborative manner.<br />

History<br />

Practice shows that Open Source Communities nowadays differ in terms of organisation, sourcing,<br />

and business models. Watson et al. (2008) distinguish five (business) models in the history of<br />

software production in which Open Source Communities play a relevant part: (1) Proprietary, (2)<br />

Open Community, (3) Corporate Distribution, (4) Sponsored Open Source, and (5) Second Generation<br />

Open Source (Watson et al., 2008, pp. 42 f.).<br />

Proprietary software production and the open community model both stem from the early days of<br />

software development. In proprietary software production, companies hire developers and sell their<br />

software products to customers while keeping the source code secret. The open community model<br />

stems from the same period but builds on people freely exchanging code without commercial interests<br />

(von Hippel/von Krogh, 2003; Watson et al., 2008).<br />

Besides these formerly dominating types, other forms of software development emerged later on,<br />

which in many cases are based on consumer innovation communities:<br />

Corporate Distribution refers to firms creating value and generating revenue by providing distribution<br />

methods and complementary services for highly sophisticated F/OSS software. Examples of corporate<br />

distribution are RedHat, SuSE, and other Linux-based products, where distribution, support, and<br />

knowledge are commercially provided by firms. Nevertheless, this business model rests upon<br />

software originally developed by the F/OSS community dealing with the development of Linux.<br />

Sponsored Open Source applies to F/OSS projects sponsored by companies or foundations. These<br />

projects often emerge from formerly closed source projects whose companies at some point opened<br />

up the source code to the community. Examples of Sponsored Open Source are the Apache Software<br />

Foundation projects, the Ubuntu project, and the Mozilla Foundation (Bärwolff/Gehring/Lutterbeck,<br />

2006).<br />

Second Generation Open Source (also OSSg2) is a hybrid of the previously described models of<br />

Corporate Distribution and Sponsored Open Source. OSSg2 companies generate most of their revenue<br />

by providing complementary services around their products. Furthermore, they provide most of the<br />

development and maintenance resources and in most cases control the source code of the products<br />

they freely provide. Familiar examples of this model are, amongst others, JBoss and MySQL.<br />

Like software development in general, which nowadays often takes place in globally distributed<br />

settings (cp., for example, near- and offshoring), Open Source Communities need organisational<br />

structures and suitable tools to enable the collaborative creation of software by developers who are<br />

often spread all over the world. F/OSS developers in Open Source Communities are generally<br />

organised as a virtual community, contrary to the development teams of, for example, software firms.<br />

Usually, they use Internet-based tools for organisational tasks and code versioning (Scacchi, 2007,<br />

pp. 461 f.).<br />

30


Actors<br />

The survey of Lakhani/Wolf (2003) gives a detailed view of the people Open Source Communities<br />

consist of. In a nutshell, the majority of F/OSS developers have a university-level education in<br />

computer science or information technology. Most participants in Open Source Communities are<br />

professional programmers, system administrators, IT managers, students, and academic researchers<br />

(Lakhani/Wolf, 2003). Furthermore, "FOSS developers are typically also end-users of the FOSS they<br />

develop, and other end-users often participate in and contribute to FOSSD efforts" (Scacchi, 2007,<br />

p. 459).<br />

Aims<br />

One of the core questions regarding Open Source Communities is why people join these communities<br />

and freely contribute their time and knowledge to the public development of free or open source<br />

software.<br />

Raymond (1999) and Scacchi (2007) mention three reasons why developers contribute to open source<br />

communities: (1) Developers in open source communities often directly benefit from the software<br />

they develop; they use the software themselves and thus create or modify functionalities according to<br />

their own demands. (2) Members of open source communities contribute simply because they enjoy<br />

implementing software. (3) Developers contribute in order to gain a good reputation: developing<br />

sophisticated software and showing professional skills helps to make one's mark. Contributing may<br />

thus also lead to job offers from companies that recognise a developer's skills (Raymond, 1999;<br />

Scacchi, 2007). These findings were confirmed by a web-based survey Lakhani/Wolf (2003) ran in<br />

2003, asking 684 developers in 287 F/OSS projects about their motivation to contribute their ideas<br />

and working time to these projects.<br />

Consumer Innovation Communities in the Consumer Product Field<br />

Comparable to the open source example above, the literature also describes consumer innovation<br />

communities that enable major innovations in the consumer product field. Like open source<br />

communities, they are run completely by and for users. These consumer innovation communities<br />

arose in response to the open source phenomenon and became popular in the scientific literature at the<br />

beginning of the 2000s.<br />

For example, Franke and Shah (2003) examine how user-innovators of sport-related consumer<br />

product innovations gather the information and assistance they need to develop their ideas and how<br />

they share and collaboratively develop the resulting innovations. Shah (2000) as well as von Hippel<br />

(2001, 2005) and Lüthje (2000) describe how enthusiasts in sports like windsurfing, kite-surfing, or<br />

skateboarding jointly develop new products or modify existing ones and test their advanced products<br />

directly. Although consumer innovation communities are described and investigated in the literature<br />

almost exclusively for sporting goods, in practice there are innovation communities in almost every<br />

product category (e.g., wines, cameras, and cars).<br />

Innovation communities for consumer goods and open source communities differ in two dimensions:<br />

the medium used (online vs. offline) and the product created (intangible vs. tangible). In OSS projects,<br />

users collaboratively develop software on the Internet. As software consists of intangible code, it is<br />

easy to "produce" and distribute via the Internet. The communities mentioned in this section, by<br />

contrast, regularly operate in face-to-face situations, as physical products can hardly be developed or<br />

tested completely virtually.<br />

Nevertheless, the virtual customer innovation communities described here use the Internet in a<br />

supportive manner for the development of tangible goods. For example, in the virtual café<br />

"alt.coffee", devoted coffee connoisseurs share their ideas and thoughts about how to improve coffee<br />

machines and bean roasters in order to enjoy the optimal coffee experience (Kozinets, 1999; Kozinets,<br />

2002). Another popular example of an innovative online community is the "Harley Owners Group"<br />

(www.hog.com). Members of this online community dedicated to Harley Davidson motorcycles<br />

discuss and demonstrate concepts for individualised motorbikes and accessories, and the producer<br />

Harley Davidson later includes the users' ideas in the development process (McWilliam, 2000). Füller<br />

et al. (2007) examine four online communities focussing on basketball shoes, whose members share<br />

ideas for improving the design and other features.<br />

Business Innovation Communities<br />

The term "innovation community" has another usage in the literature. From organisation theory it is<br />

known that different industrial organisations are involved in common innovation projects.<br />

Lynn/Reddy/Aram (1996) used the term "innovation community" in this context and developed a<br />

framework for innovation communities from an organisational ecology perspective. In their<br />

understanding, innovation communities consist of diverse populations of both sub-structural and<br />

super-structural organisations: producers, vendors, distributors, trade associations, professionals, and<br />

state agencies. Each population has a specific set of competencies and performs a specialised role.<br />

The exact configuration of populations and the role mix differ among innovation communities and<br />

within a given community over time. However, Lynn/Reddy/Aram's innovation communities refer<br />

only to the process of commercialising innovations, so they cover just a single phase of the innovation<br />

process. Furthermore, they do not operate solely via the Internet.<br />

Lynn/Reddy/Aram's (1996) approach is the only example of this community concept that we found.<br />

Normally, the "community" concept does not encompass groups of (only) firms interacting with each<br />

other in the field of innovation development. In organisation theory, cooperating firms are usually<br />

described as a value chain, value network, or ecosystem.<br />

Innovation Broker Models<br />

A third class of innovation communities that stands between autonomous and sponsored communities<br />

are communities that are initiated and sustained by a brokering model.<br />

History<br />

Brokers play a central role in general commerce. A broker is a party that mediates between a buyer<br />

and a seller; it serves as a matchmaker and tries to bring supply and demand closer together. With the<br />

advent of the information age, brokers also engaged in mediating information goods and became<br />

known as "infomediaries". Contrary to the belief that they might become superfluous with the new<br />

possibilities of direct sales over the Internet, infomediaries increased in importance<br />

(Palvia/D'Aubeterre, 2007). Naturally, brokers also moved into the area of innovation, trying to bring<br />

parties looking for innovations closer to parties that might be able to provide them.<br />

Actors<br />

In these networks, brokers play a matchmaking role between "seekers" of solutions to specific,<br />

well-defined (scientific) problems and so-called "solvers" of these problems. Usually, seekers offer a<br />

(sometimes significant) amount of money for the successful solution of a problem. Brokers thus act<br />

as specialist intermediaries and innovation facilitators whose goal is that of all marketplaces: to bring<br />

supply and demand closer together. For their brokering and mediating activities, brokers usually<br />

receive a fee.<br />

Seekers are commonly R&D-intensive corporations; solvers are individual scientists or small research<br />

laboratories spread across the world. Solvers are usually individuals with certain knowledge and skills<br />

who work independently to solve problems posed by seekers. Their motivation to participate in such a<br />

network comprises the monetary benefits of solving a problem, enjoyment of the scientific challenges,<br />

and the career perspectives gained by establishing themselves as successful scientists.<br />

Innovation facilitators allow their clients to source innovative ideas from outside their organisations.<br />

They give seekers easy access to a large pool of potential solvers, allowing them to tap into a global<br />

pool of scientists and technologists.<br />

One example of such a brokered innovation community is InnoCentive (Nambisan/Sawhney, 2008;<br />

www.innocentive.com). The same concept of brokered innovation communities like InnoCentive is<br />

also described by Birkinshaw/Bessant/Delbridge (2007) using the term “Idea Networks”.<br />
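The seeker-solver mediation described in this section can be illustrated with a minimal marketplace model. The following Python sketch is hypothetical: the class name, fee rate, and challenge data are invented for illustration and do not model the actual workings or API of InnoCentive or any other platform.<br />

```python
# Minimal, hypothetical sketch of the seeker-solver-broker pattern
# described above. Names and numbers are illustrative only.

class Broker:
    def __init__(self, fee_rate):
        self.fee_rate = fee_rate      # broker's cut of the reward
        self.challenges = {}          # problem id -> reward and solutions

    def post_challenge(self, problem_id, reward):
        """A seeker posts a well-defined problem with a reward attached."""
        self.challenges[problem_id] = {"reward": reward, "solutions": []}

    def submit_solution(self, problem_id, solver, solution):
        """Any solver in the global pool may respond to any challenge."""
        self.challenges[problem_id]["solutions"].append((solver, solution))

    def award(self, problem_id, winning_solver):
        """The seeker picks a winner; the broker keeps a mediation fee."""
        reward = self.challenges[problem_id]["reward"]
        fee = reward * self.fee_rate
        return {"solver": winning_solver, "payout": reward - fee, "fee": fee}

broker = Broker(fee_rate=0.1)
broker.post_challenge("polymer-42", reward=10000)
broker.submit_solution("polymer-42", "lab_a", "use additive X")
broker.submit_solution("polymer-42", "chemist_b", "lower curing temperature")
result = broker.award("polymer-42", "chemist_b")
print(result["payout"], result["fee"])  # → 9000.0 1000.0
```

The essential design point is that the broker, not the seeker, owns the challenge registry, which is what allows a single solver pool to serve many client companies.<br />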

Aims<br />

The main aim of brokers is to facilitate the discovery of solvers by the seekers of innovation. This is<br />

usually achieved by providing a marketplace for research and development activities.<br />

In addition to providing a marketplace and thus bringing supply and demand together, innovation<br />

brokers can also engage in transforming ideas. Hargadon and Sutton (2000) identified four knowledge<br />

brokering activities through which brokers support others in managing innovations; these activities<br />

form a cycle:<br />

1) capture good ideas<br />

2) keep ideas alive<br />

3) imagine new uses for old ideas<br />

4) put promising concepts to the test (Hargadon/Sutton 2000).<br />

Sawhney et al. (2003) analysed the role of brokers with regard to innovation support. They call these<br />

actors that engage in mediating innovation "innomediaries" and identified three different modes in<br />

which brokers can engage in building a community.<br />

• Customer network operator: The core function assumed by customer network<br />

operators is to create a network of customers and provide access to a specific<br />

segment of the customer base.<br />

• Customer community operator: These innomediaries build and operate online<br />

communities for specific interests, lifestyles, or around specific products.<br />

• Innovation marketplace operator: These brokers operate a marketplace where more<br />

than a single company engages in sourcing information.<br />

A summary of the broker roles is depicted in Figure 1.<br />

33


34<br />

Company<br />

Company<br />

Innomediary<br />

Innomediary<br />

C<br />

C<br />

C<br />

C C<br />

C C<br />

Buyer Seller<br />

Buyer<br />

Buyer<br />

Innomediary<br />

Seller<br />

Seller<br />

Customer network<br />

operator<br />

Customer community<br />

operator<br />

Innovation marketplace<br />

operator<br />

Figure 1 Three types of virtual knowledge brokers (Source: Sawhney et al. 2003).<br />

Innovation brokers allow companies to access innovation potential that lies outside their company<br />

boundaries. Contrary to open source or consumer product communities, broker communities are not<br />

formed autonomously but by the broker, who plays the core role in them. With regard to business and<br />

customer integration communities, brokers allow client companies (i.e., the seekers, or sellers) to<br />

increase their reach by aggregating a larger pool of solvers. This might, for example, be due to the fact<br />

that a brokered community serves more than one client company and is thus able to attract a larger<br />

pool of solvers and contributors. A brokered community may also address a broader topic (i.e.,<br />

"outdoor sports" in general rather than "outdoor sports equipment by company X"). Moreover,<br />

brokers allow faster extraction of innovations, as their communities are already in place and do not<br />

have to be built from scratch (i.e., a company can post an innovation challenge at InnoCentive and use<br />

the innovation potential of that community without engaging in community building).<br />

Conclusion and Outlook<br />

In this article we analysed the literature on the various forms of virtual communities related to<br />

innovation. First, we analysed open source communities. This area has received much attention from<br />

researchers in recent years; in particular, the motivations of users who contribute to open source have<br />

been analysed. Research suggests that users are mainly motivated by their own demands, by the<br />

enjoyment of solving programming challenges, and by the wish to gain reputation as developers.<br />

Next, we summarised work on innovation communities and how they are used in a business<br />

environment to<br />

achieve cross-functional innovation development. Innovation communities for customer integration<br />

allow companies to overcome the closed innovation paradigm by integrating customers into their<br />

innovation process. Finally, brokered innovation communities have been discussed. These<br />

communities are characterised by the leading role a broker plays in bringing "seekers" (organisations<br />

with research problems) and "solvers" (individuals that solve these research problems) together. Here,<br />

brokers play a matchmaking role and allow organisations to tap into a much larger pool of potential<br />

innovators than would otherwise be possible. Table 2 summarises our literature review.<br />

Table 2 Summary of innovation communities.<br />

Open Source communities. Sources: Scacchi (2007); Watson et al. (2008); von Hippel/von Krogh<br />

(2003); Bärwolff et al. (2006); Lakhani/Wolf (2003); Raymond (1999). Actors: customers<br />

(professional programmers, system administrators, IT managers, students, and academic researchers).<br />

Subject: innovation and implementation of F/OSS software. Aims: (1) benefit from the software they<br />

develop, (2) joy of creating software, (3) reputation.<br />

Consumer products. Sources: Franke/Shah (2003); Füller et al. (2007); von Hippel (2001, 2005);<br />

Lüthje (2000). Actors: customers. Subject: innovations for consumer products, especially sporting<br />

goods. Aims: supporting innovation activities in real-world settings by exchanging ideas etc.<br />

Business innovation communities. Source: Lynn/Reddy/Aram (1996). Actors: companies and their<br />

partners. Subject: innovations that are ready for market launch. Aims: fostering the diffusion and<br />

adoption of innovations.<br />

Broker communities. Sources: Birkinshaw/Bessant/Delbridge (2007); Hargadon/Sutton (2000);<br />

Nambisan/Sawhney (2008); Sawhney et al. (2003). Actors: seekers, solvers, broker/community<br />

provider. Subject: ideas, problem solutions, R&D tasks/solutions, patents. Aims: (1) mediating<br />

between seekers and solvers, (2) transformation of ideas.<br />

Our research summary shows what types of innovation communities exist, what actors are involved in<br />

the respective communities, and how and on what subjects they operate. A general distinction between<br />

sponsored and autonomous communities can be drawn. Although the categories overlap in certain<br />

parts, this distinction provides a useful framework for analysing how a community is formed and how<br />

the innovations developed within it are used for service and product development. Figure 2 provides a<br />

graphical overview of the different manifestations of innovation communities.<br />


Figure 2 Topic map of different research streams investigating innovation communities.<br />

One limitation of our work is its pure focus on existing literature. Some emerging innovation<br />

community phenomena are so new that they have not yet been covered in published research. A more<br />

exploratory and empirical approach that analyses communities currently found on the web would<br />

remedy this limitation. Such research could, for example, derive a set of properties and build a<br />

taxonomy of innovation communities. Further research would also be required to analyse the<br />

innovation capabilities of each of the different community types and to determine which type is best<br />

suited for which innovation challenges.<br />

We believe, however, that this literature review of innovation communities is a first step<br />

in organising our understanding of innovation communities and helps to delineate the<br />

different approaches to guide future research.<br />

References<br />

Bärwolff, M.; Gehring, R. A.; Lutterbeck, B. (2006). Open Source Jahrbuch 2006: Zwischen<br />

Softwareentwicklung und Gesellschaftsmodell. 1st edition, Lehmanns.<br />


Blanchard, A. L.; Markus, M. L. (2004). The Experienced "Sense" of a Virtual Community:<br />

Characteristics and Processes. The DATA BASE for Advances in Information Systems, 35(1),<br />

pp. 64-71.<br />

Chesbrough, H. (2003). The Era of Open Innovation. MIT Sloan Management Review, 44(3),<br />

pp. 35-41.<br />

Eysenbach, G.; Powell, J.; Englesakis, M.; Rizo, C.; Stern, A. (2004). Health related virtual<br />

communities and electronic support groups: systematic review of the effects of online peer to<br />

peer interactions. British Medical Journal, 328, pp. 1166-1172.<br />

Franke, N.; Shah, S. (2003). How communities support innovative activities: an exploration of<br />

assistance and sharing among end-users. Research Policy, 32, pp. 157-178.<br />

Füller, J. (2005, März 21). Community-Based-Innovation - eine Methode zur<br />

Einbindung von Online-Communities in den Innovationsprozess.<br />

Füller, J., Bartl, M., Ernst, H., & Mühlbacher, H. (2006). Community based innovation:<br />

How to integrate members of virtual communities into new product development.<br />

Electronic Commerce Research, 6(1), 57-73.<br />

Füller, J., Jawecki, G., & Mühlbacher, H. (2007). Innovation creation by online<br />

basketball communities. Journal of Business Research, 60(1), 60-<br />

71.<br />

Füller, J., & Matzler, K. (2007). Virtual product experience and customer participation -<br />

A chance for customer-centred, really new products. Technovation, 27(6/7), 378-<br />

387.<br />

Gloor, P. A. (2005). Swarm Creativity: Competitive Advantage through Collaborative<br />

Innovation Networks. Oxford University Press, USA.<br />

Hagel III, J., & Armstrong, A. (1997). Net Gain: Expanding Markets through Virtual<br />

Communities. Boston, MA: Harvard Business School Press.<br />

Kozinets, R. (1999). E-tribalized marketing? The strategic implications of virtual<br />

communities of consumption. European Management Journal, 17(3), 252-264.<br />

Kozinets, R. (2002). The field behind the screen: using netnography for marketing<br />

research in online communications. Journal of Marketing Research, 39(1), 61-72.<br />

Lakhani, K., & Wolf, R. G. (2003). Why Hackers Do What They Do: Understanding<br />

Motivation and Effort in Free/Open Source Software Projects. SSRN eLibrary.<br />



Leimeister, J. M. (2005). Virtuelle Communities für Patienten: Bedarfsgerechte<br />

Entwicklung, Einführung und Betrieb. Wiesbaden.<br />

Lüthje, C. (2000). Characteristics of innovating users in a consumer goods field.<br />

Working Paper, University of Hamburg, Hamburg.<br />

Lynn, L. H., Reddy, N. M. and Aram, J. D. (1996). Linking technology and institutions:<br />

the innovation community framework, in: Research Policy, 25(1), pp. 91–106.<br />

Maloney-Krichmar, D., and Preece, J. (2002). The Meaning of an Online Health<br />

Community in the Lives of Its Members: Roles, Relationships, and Group<br />

Dynamics, in: Proceedings of the International Symposium on Technology and<br />

Society (ISTAS’02), Raleigh.<br />

McWilliam, G. (2000). Building strong brands through online communities. Sloan<br />

Management Review, 41(13).<br />

Nambisan, S., & Sawhney, M. (2008). The Global Brain: Your Roadmap for Innovating<br />

Faster and Smarter in a Networked World.<br />

Wharton School Publishing.<br />

Palvia, & D’Aubeterre (2007). Examination of Infomediary Roles in B2C E-<br />

Commerce. Journal of Electronic Commerce Research, 8, 207-220.<br />

Preece, J. (2000). Online Communities: Designing Usability, Supporting Sociability.<br />

Chichester, UK.<br />

Raymond, E. S. (1999). The Cathedral and the Bazaar: Musings on Linux and Open<br />

Source by an Accidental Revolutionary. O'Reilly.<br />

Rheingold, H. (1993). The Virtual Community: Homesteading on the Electronic Frontier.<br />

Reading, MA: Addison-Wesley Publishing.<br />

Reichwald, R., & Piller, F. (2006). Interaktive Wertschöpfung. Gabler Verlag.<br />

Sawhney, M., Prandelli, E., & Verona, G. (2003). The Power of Innomediation. MIT<br />

Sloan Management Review, 44, 77-82.<br />

Scacchi, W. (2007). Free/open source software development. In Proceedings of the<br />

6th Joint Meeting of the European Software Engineering Conference and the ACM<br />

SIGSOFT Symposium on the Foundations of Software Engineering, pp. 459-468,<br />

Dubrovnik, Croatia: ACM.<br />

Shah, S., (2000). Sources and patterns of innovation in a consumer products field:<br />

innovations in sporting equipment. Sloan Working Paper #4105.<br />

von Hippel, E. (2001). Innovation by user communities: learning from open-source<br />

software. MIT Sloan Management Review, 42(4), 82-86.<br />



von Hippel, E. (2005). Democratizing innovation, MIT Press, Cambridge, MA.<br />

von Hippel, E., & von Krogh, G. (2003). Open Source Software and the 'Private-<br />

Collective' Innovation Model: Issues for Organization Science. Organization<br />

Science, 14(2), 209-223.<br />

Walcher, P. (2007). Der Ideenwettbewerb als Methode der aktiven Kundenintegration (1st<br />

ed.). Wiesbaden: Deutscher Universitäts-Verlag.<br />

Watson, R. T., Boudreau, M., York, P. T., Greiner, M. E., & Wynn Jr., D. (2008). The<br />

Business of OPEN SOURCE. Communications of the ACM, 51(4), 41-46.<br />

West, J., & O'Mahony, S. (2008). The Role of Participation Architecture in Growing<br />

Sponsored Open Source Communities. Industry & Innovation, 15,<br />

145-168.<br />

Wirtschaftsinformatik im Verband der Hochschullehrer für Betriebswirtschaft e.V.<br />

(WKWI), WI-Orientierungslisten. Wirtschaftsinformatik, 2, 2008, pp. 155-163.<br />



Developing Hybrid Products: Integrating Products and Services<br />

Sebastian Esch, Jens Fähling, Felix Köbler, Philipp Langer<br />

Literature Review<br />

Technische Universität München<br />

Lehrstuhl für Wirtschaftsinformatik (I 17)<br />

Boltzmannstr. 3 – 85748 Garching<br />

"It has been said: The whole is more than the sum of its parts. It is more correct to say that the whole<br />

is something else than the sum of its parts, because summing up is a meaningless procedure, whereas<br />

the whole-part relationship is meaningful."<br />

(Koffka 1935)<br />

Abstract<br />

Hybrid products, integrated combinations of products and services, offer new ways to differentiate<br />

from the competition and provide customers with solutions to specific problems. As hybrid products are a<br />

young field of information systems (IS) research, this paper gives an overview of the current state<br />

of definitions and characteristics of hybrid products and of the benefits of and barriers to adopting<br />

them, and analyzes different development approaches for such hybrid products. Our analysis shows<br />

that there are several open questions regarding the role of IS in the context of hybrid products and the<br />

integrated development of products, software and services.<br />

Introduction<br />

Product manufacturers see themselves confronted with growing competition. Simple products are<br />

easily copied by competitors from countries with low labor costs. Even complex products no<br />

longer guarantee protection against being copied or imitated (Leimeister and Glauner 2008). Another aspect<br />

influencing the integration of products and services is the fact that customers are not interested in the<br />

product or service itself, but rather want a problem to be solved (Sawhney 2006). The integration of<br />

products and services offers demanding customers a solution tailored to fit their specific problem.<br />

Products or services offered in isolation often address only parts of customer problems.<br />

Different disciplines, such as mechanical engineering, service engineering, business management, IS<br />

research and software engineering, investigate the potentials and challenges that arise from the integration of products<br />

and services.<br />

Mechanical engineering research has a strong understanding and long history of developing and<br />

managing technical products for various markets (Lindemann 2006). Service engineering or service<br />

science research is a relatively new discipline focusing on the management and development of<br />

services (Bullinger and Scheer 2003). Management research offers methods and tools for the<br />



management of complex enterprises and business networks. Every discipline provides methods,<br />

models and tools that fit its specific aspect of products, services and companies that provide either<br />

or both. But it is not clear whether those means are appropriate to meet the challenges in managing and<br />

developing tightly integrated products and services - so-called hybrid products (Leimeister and<br />

Glauner 2008).<br />

Since many combinations of products and services require IS to link them together, the<br />

discipline of IS research offers the possibility to connect the different disciplines and provide insight<br />

into the management and development of such hybrid products (Leimeister and Glauner 2008).<br />

This paper aims to present the current state of the art in research on the integration of products and<br />

services. Therefore, all issues from the last five years of the journals rated A in the current<br />

journal ranking list of the German IS community were searched for specific keywords related to the<br />

integration of products and services. The following keywords were used for German journals:<br />

“hybrides Produkt”, “hybride Leistung”, “hybride Wertschöpfung”, “Verbundsystem”,<br />

“Leistungssystem”, “Leistungsbündel”, “komplexe Lösung”, “Bündelung”,<br />

“Dienstleistungsentwicklung”, “Sach- und Dienstleistungsbündel”, “Produkt-Dienstleistungs-Bündel”.<br />

For journals published in English, the following keywords were used: “hybrid product”,<br />

“service engineering”, “product service system”, “product service systems engineering”, “pss”, “pss-system”,<br />

“compacks”, “complex packages”, “hybrid value-creation” and “bundling”. In the next step<br />

we analyzed the title and abstract of the articles we found and excluded those that were clearly outside<br />

the scope of hybrid products. As the number of matching articles was very low, we extended our<br />

review into other fields, like mechanical engineering and other publications in IS research, service<br />

engineering and business management.<br />

We then categorized all articles into the areas: “definition and types of hybrid products”,<br />

“development”, “management”, “state-of-the-art”, “value of hybrid products” and “business<br />

environment”.<br />
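The screening procedure described above (keyword search over titles and abstracts, then exclusion of out-of-scope articles) can be sketched in a few lines of Python. The keyword list is abbreviated and the article records are invented for illustration; this is not the actual review corpus or tooling.

```python
# Illustrative sketch of the keyword screening step: an article is kept if any
# search keyword occurs in its title or abstract. Keywords and articles below
# are examples only.

KEYWORDS = ["hybrid product", "product service system", "service engineering",
            "bundling", "hybrid value-creation"]

def matches_scope(article: dict) -> bool:
    """Keep an article if any keyword occurs in its title or abstract."""
    text = (article["title"] + " " + article["abstract"]).lower()
    return any(kw in text for kw in KEYWORDS)

def screen(articles: list[dict]) -> list[dict]:
    """Exclude articles that are clearly outside the scope of hybrid products."""
    return [a for a in articles if matches_scope(a)]

articles = [
    {"title": "Developing Product Service Systems", "abstract": "..."},
    {"title": "Agile Testing in Practice", "abstract": "..."},
]
print([a["title"] for a in screen(articles)])  # -> ['Developing Product Service Systems']
```

In the actual review, the surviving articles were additionally screened manually and then sorted into the categories listed above; a purely lexical filter like this one only approximates that judgment.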

The first section of this paper presents the different terms for product service bundles and their<br />

definitions. The following sections discuss benefits and barriers of the integration of products and<br />

services, followed by a selected example representing the applied integration of products and<br />

services. After that, different development approaches for hybrid products are compared. In the<br />

conclusion, various interesting questions for future research are presented.<br />

Definitions<br />

Two different streams characterize the research area of product service systems and hybrid products.<br />

Terms like “Leistungsbündel”, “Compack (complex package)”, ”Leistungssystem” and<br />

„Verbundsystem“ (Botta 2007) are commonly used in management research. In German management<br />

and information systems research, the terms “Lösungen”, “Hybride Produkte” and “Hybride<br />

Wertschöpfung” ((Leimeister and Glauner 2008); (Burianek, Ihl et al. 2007; Ernst 2007)) are well<br />

established, because these terms were coined in the BMBF-funded research programs<br />

“Innovation mit Dienstleistungen” and “Rahmenkonzept Forschung für die Produktion von morgen”<br />

(Thomas, Walter et al. 2008). In contrast, engineering research created the term “Product Service<br />

System (PSS)” ((Goedkoop, Halen et al. 1999; Baines, Lightfoot et al. 2007; Botta 2007)).<br />

While management research underlines the aspects of differentiation and output expansion, many<br />

researchers on PSS focus more on economic aspects. According to the state-of-the-art paper on<br />

product service systems by (Baines, Lightfoot et al. 2007), the first formal definition of PSS came<br />

from Goedkoop et al. (1999) in their final report on the project named “Product Service Systems,<br />



Ecological and Economic Basics”, that was commissioned by the Dutch ministries of Environment<br />

(VROM) and Economic Affairs (EZ) in the late 1990s:<br />

“A product service-system is a system of products, services, networks of ‘players’ and<br />

supporting infrastructure that continuously strives to be competitive, satisfy customer needs<br />

and have a lower environmental impact than traditional business models.“ (Goedkoop, Halen<br />

et al. 1999)<br />

The project was based on the following hypothesis that emphasizes the multidimensional<br />

considerations of the PSS research:<br />

“We have to find ways to increase the perceived value of all transactions without increasing<br />

the environmental load of products involved. The solution could be to dematerialise<br />

economy. One strategy for this seems a shift from an economy based on production and<br />

consumption of physical products to a services-based economy.” (Goedkoop, Halen et al.<br />

1999)<br />

Based on this definition, their exploration considered four viewpoints for product service system<br />

assessment (Goedkoop, Halen et al. 1999): economic aspects, ecological aspects, the company’s<br />

identity and strategy, and market acceptance.<br />

As the German research community is shaped by the funded research projects on hybrid products<br />

mentioned above, the following definition of hybrid products or solutions was developed in this<br />

context:<br />

“A hybrid product is a bundle of efforts that represents an aligned combination of product(s)<br />

and service(s), and that is adjusted to the individual customer’s needs.” (Translated from<br />

German) (Burianek, Ihl et al. 2007)<br />

An outline of several definitions of hybrid products and solutions can be found in Leimeister and<br />

Glauner (Leimeister and Glauner 2008) and Schmitz (Schmitz 2008). The aspects most commonly considered in these<br />

definitions of hybrid products are<br />

• the combination of products and services,<br />

• the added value created by this combination (the solution is more than the sum of its parts), as<br />

well as<br />

• the adjustment of the hybrid product to the individual customer’s needs.<br />

In this publication, we use the term Product Service System (PSS) synonymously with Hybrid<br />

Product and all the other terms stated above, because we focus on the common basis of both streams:<br />

the combination and integration of products and services into a solution for the customer.<br />

Characteristics of hybrid products<br />

Mont (2002) identifies a “clear lack of common understanding of PSS elements” and gives an<br />

overview of the elements of PSSs. First of all, a PSS may consist of products and services in various<br />

combinations. Other elements include services at the point of sale like personal assistance in shops,<br />

financial schemes or marketing. In addition, different concepts of product use, termed product-, use-<br />

or result-oriented (Tukker 2004), are an element of a PSS. Maintenance services with the goal of<br />

prolonging product life cycles are the fourth element. The last elements are revalorization services that<br />



close the product material cycle by taking products back, utilizing usable parts in new<br />

products and recycling materials. Tukker provides a summary of various PSS classifications. He<br />

identified three main categories of PSS that are primarily used (Tukker 2004).<br />


Figure 1: Main and subcategories of PSS (adapted from (Tukker 2004))<br />

The first main category, product-oriented services, contains business models that are dominated by<br />

product sales and some additional services. The second main category contains use-oriented<br />

services. Here, the product still plays a central role, but in contrast to product-oriented services<br />

the product’s ownership remains with the provider. This enables the provider to make the<br />

product available in different forms and to various customers. Finally, result-oriented services<br />

represent the third main category. Here, no pre-determined product is involved, because the<br />

customer and provider agree on a specific result. Tukker notes that PSSs within these main categories<br />

are still very different with respect to their economic and environmental characteristics.<br />

Product-oriented services can be divided into product-related services and advice and<br />

consultancy (Tukker 2004). Product-related services include the sale of products with additional<br />

services that are needed during the use phase of the product, like maintenance contracting, financing<br />

schemes or the supply of consumables. In contrast, advice and consultancy include hints and<br />

directions for the most efficient use of a product, for example regarding organizational structure or logistics<br />

optimization, rather than merely selling a product.<br />

Use-oriented services comprise product leasing, product renting or sharing, and product pooling<br />

(Tukker 2004). Because product leasing involves no transfer of ownership from the provider to the customer,<br />

the provider is often also responsible for maintenance, repair and control. The customer<br />

normally has unlimited and individual access to the leased product. Product renting or sharing is quite<br />

similar to product leasing, but differs in the access to the product. Unlimited and individual access is<br />

normally not found in product renting and sharing. Here, the customer has to share access with other<br />

customers, whereby the product is used sequentially by different users. Product pooling, furthermore,<br />

permits simultaneous use of the product.<br />

Finally, result-oriented services can be divided into activity management and outsourcing, pay per<br />

service unit, and agreement on a functional result (Tukker 2004). Activity management and<br />

outsourcing mean that a provider delivers parts of activities or whole processes for a customer, like<br />

office cleaning or catering. These services are assigned to this category because most of these<br />

contracts contain performance indicators for quality control of the outsourced services. Another<br />

standard PSS category is pay per service unit. Here, the provider takes over all activities that are<br />

needed to keep the product and services available, and the customer only pays for the output of the<br />

product. An example is the pay-per-copy scheme for office copiers that is already included in the<br />

portfolios of most copier producers. The functional result is similar to activity management and<br />

outsourcing, but adds technological independence. Tukker uses the example of a provider that no longer<br />

sells gas or cooling equipment but a “pleasant climate” (Tukker 2004). Here, the customer is<br />

only interested in a specific climate and does not care what kind of technological system is applied.<br />
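Tukker's classification amounts to a small two-level taxonomy, which can be restated as a lookup table. The Python mapping below is our own illustration of the categories named above, not notation from Tukker's paper.

```python
# Sketch of Tukker's (2004) PSS classification: three main categories, each
# with the subcategories discussed in the text. The dictionary layout and the
# helper function are illustrative, not part of Tukker's work.

PSS_CATEGORIES = {
    "product-oriented": ["product-related services", "advice and consultancy"],
    "use-oriented": ["product leasing", "product renting/sharing",
                     "product pooling"],
    "result-oriented": ["activity management/outsourcing",
                        "pay per service unit", "functional result"],
}

def main_category(subcategory: str) -> str:
    """Return the main PSS category a given subcategory belongs to."""
    for main, subs in PSS_CATEGORIES.items():
        if subcategory in subs:
            return main
    raise ValueError(f"unknown subcategory: {subcategory}")

print(main_category("pay per service unit"))  # -> result-oriented
```

A pay-per-copy contract, for instance, resolves to the result-oriented main category, matching the office-copier example in the text.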

Another approach to the systematization of hybrid products is suggested by Burianek, who built up a fine-granular<br />

classification with seven dimensions to estimate the complexity of hybrid products: type of<br />

customer value, scope of the output, number and heterogeneity of the subservices, degree of technical<br />

integration, degree of integration into the customer’s value chain, degree of individualization, and<br />

finally temporal dynamics and variability of the service provision (Burianek, Ihl et al. 2007).<br />
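The seven dimensions can be represented as a small record type. Note that the 1-5 scale and the aggregation into a single score below are our own illustrative assumptions; Burianek et al. only name the dimensions.

```python
# Burianek et al.'s (2007) seven complexity dimensions as a dataclass. The
# 1-5 rating scale and the naive averaging are illustrative assumptions,
# not part of the cited classification.

from dataclasses import dataclass, astuple

@dataclass
class HybridProductComplexity:
    customer_value_type: int        # type of customer value
    output_scope: int               # scope of the output
    subservice_heterogeneity: int   # number and heterogeneity of subservices
    technical_integration: int      # degree of technical integration
    value_chain_integration: int    # integration into the customer's value chain
    individualization: int          # degree of individualization
    temporal_dynamics: int          # dynamics/variability of service provision

    def score(self) -> float:
        """Naive average over all seven dimensions (illustrative only)."""
        values = astuple(self)
        return sum(values) / len(values)

# Hypothetical rating of a pay-per-copy contract on the 1-5 scale:
copier_contract = HybridProductComplexity(2, 3, 2, 3, 4, 2, 3)
print(round(copier_contract.score(), 2))  # -> 2.71
```

Such a profile makes the dimensions comparable across offerings, even though any real complexity assessment would weight them rather than average them.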

The following section will introduce requirements for the development of hybrid products and<br />

compare three methods found in literature to support the lifecycle management of hybrid products.<br />

Benefits of hybrid products<br />

The trend towards product-service systems promises potential benefits that range from economic<br />

success for manufacturing and service-providing companies to environmental and social gains. The<br />

potential derives from changes in production and consumption patterns that might accelerate the<br />

adjustment towards more sustainable practices and societies (Mont 2002). The concept is thus a<br />

special case of “servitization” (Baines, Lightfoot et al. 2007) that extends the traditional functionality of<br />

a product by incorporating additional services: multiple stakeholders jointly provide<br />

processes that plan, design, develop, deliver and maintain integrated combinations of products and<br />

services in order to pursue competitive strategies, achieve environmental sustainability and differentiate from<br />

competitors (Baines, Lightfoot et al. 2007). Stakeholders of such processes can be manufacturing and<br />

service companies. However, consumers play an increasingly important twofold part: they deliver<br />

and integrate knowledge into the planning and design processes of product service systems, and they<br />

benefit from highly aligned solutions. Furthermore, scholars see a macro-level potential for<br />

governments, society and the environment. Consequently, consistent with the literature, benefits are<br />

grouped and described by stakeholders and beneficially affected macro-level entities. The following<br />

groups can be outlined: companies (in general), manufacturing and service companies, consumers,<br />

governments, society and the environment.<br />

Benefits for companies in general<br />

Product service systems hold a variety of general beneficial impacts on companies, whereby one<br />

company might be affected more than others, seeing the application as the centre of a new<br />

business case and a survival strategy, while other companies implement product service systems as<br />

a natural extension of existing offers to customers and/or consumers (Mont 2002). Due to the necessary<br />

intensive collaboration between a providing company and consumers, chances arise to develop durable<br />

relationships that foster mutual trust and commitment on the organizational and personal level<br />

(Schmitz 2008). This new relationship between companies and customers/consumers allows an<br />

efficient overcoming of existing information and incentive problems (Schmitz 2008), but might also<br />

lead to additional costs. For Miller et al. (2002), product service system solutions must also deliver<br />

benefits in the form of expanded margins and volumes, stabilized revenues, differentiation from<br />

competitors, and cross-selling opportunities. Further specific benefits arise for manufacturing and<br />

service companies.<br />



Benefits for manufacturing companies<br />

For traditional manufacturers, the concept is claimed to provide strategic market opportunities (Baines,<br />

Lightfoot et al. 2007) and to display alternative ways beyond standardization and mass (customized)<br />

production. The above-mentioned influence of consumer decision making on the planning and design phases<br />

of the product life cycle exhibits alternative growth strategies in mature industries. In general, Mont<br />

(2002) assumes an improvement of the total value for the customer, generated by increased servicing<br />

and service components that enhance life cycles and functionalities of products. For a number of<br />

scholars, product life cycles are expanded beyond the point of sale, making the entire product and/or<br />

its components valuable through recycling processes or reuse, whereas other authors (e.g., Mont<br />

2002) even see potential in turning take-back legislation into a competitive advantage.<br />

Benefits for service companies<br />

Mont (2002) describes the increased implementation of product components by service companies as a<br />

benefit in safeguarding market shares and certain quality levels. The former is realized by<br />

implementing product components into the offer to complicate competitors’ imitation and copying<br />

attempts, while the latter needs to be established to assure a consistently high product quality.<br />

Product components extend and diversify services and initiate a “materialization” of intangible<br />

services into tangible products, which are assumed to be easier to communicate to customers/consumers<br />

in marketing and advertising campaigns.<br />

Benefits for consumers<br />

Miller et al. (2002) summarize potential consumer benefits as superior or simplified operations, cost<br />

savings, performance guarantees, convenience, customized service, and state-of-the-art offerings.<br />

Mont (2002) further lists the “greater diversity of choice in the market, maintenance and repair<br />

services offerings, various payment schemes and the prospect to different schemes of product use that<br />

suit consumers best in terms of ownership responsibilities”. The “servitization” of products is assumed<br />

to provide increased customization and flexibility, whereas the “productization” of services leads<br />

to a deeper understanding of product service systems by customers/consumers, as companies<br />

can market tangible products better than intangible services. Moreover, customers/consumers even<br />

learn about environmental features of products, fostering new ways of contributing to minimizing the<br />

environmental impact of consumption. Furthermore, Mont (2002) proclaims that consumers are<br />

relieved of the responsibility for a product that stays under the ownership of the producer for its<br />

entire life span.<br />

Benefits for the environment<br />

A vast amount of literature emerging from European countries describes major benefits for the<br />

environment, as the adoption of product-service systems might lead to reductions in used resources and<br />

generated waste materials, since fewer products are manufactured using fewer materials per use<br />

(Baines, Lightfoot et al. 2007). This assumption is based on an increased level of responsibility of<br />

producing entities that close material cycles through take-back processes and newly introduced<br />

alternative scenarios of product use, such as sharing, renting, and leasing schemes for<br />

customers/consumers (Mont 2002). Furthermore, scholars of this research stream predict an<br />

incremental “dematerialization” of products, which decreases material flows in production and<br />

consumption, by “creating product and services that provide consumers with the same level of<br />



performance, but with an inherently lower environmental burden” (Mont 2002). However, the assumed<br />

environmental benefits of improved technologies and offering schemes might be counteracted by<br />

increased consumption (Goedkoop, Halen et al. 1999). This phenomenon is usually referred to as the<br />

“rebound effect” (Goedkoop, Halen et al. 1999), which indicates that upstream consumption might<br />

counteract the efficiency benefits gained elsewhere. According to Goedkoop et al. (Goedkoop, Halen<br />

et al. 1999), two factors can be distinguished that contribute to this effect: First, “general economic<br />

theory predicts that the longer a product exists, the more items are produced. The product gets less<br />

scarce. Often, the number of competing producers will increase. Additionally production costs will<br />

decrease as result of economy of scale. Consequently, the longer a product exists the cheaper it<br />

becomes and more people will buy and use it” (Goedkoop, Halen et al. 1999). This element is closely<br />

linked to the second effect, which states that “improvements that make products and services more<br />

effective (e.g. energy consumption) will generally lower purchasing cost or operational costs. The<br />

money a consumer saves will be spent elsewhere and reduced costs will attract new applications”<br />

(Goedkoop, Halen et al. 1999). Solutions that address this problem need to include both production<br />

and consumption patterns, whereby the focus has to shift to consumption patterns in order to adjust to<br />

consumers’ needs.<br />

Benefits for governments and society<br />

A minor set of literature also describes possible benefits for governments and society. As mentioned<br />

above, product-service systems might initiate a general rethinking which leads to a “new way of<br />

understanding and influencing stakeholder relationships and viewing product networks which may<br />

facilitate development of more efficient policies” (Mont 2002). A number of scholars (Mont 2002;<br />

Baines, Lightfoot et al. 2007) assume positive effects on job creation, as “a functional economy might<br />

be more labor-intensive than an economy based on mass production”. This results in more jobs per<br />

unit of material, as take-back systems such as repair, refurbishment or disassembly services expand the<br />

product life cycle. However, one might argue that services offered as large-scale operations lead to<br />

automation, which in turn may decrease employment (Mont 2002).<br />

Barriers of implementing hybrid products<br />

In the scope of this contribution, the authors chose an approach that groups barriers on a macro- and<br />

micro-level, whereby the latter will be discussed according to specific phases of the product life cycle.<br />

Two barriers exist on the macro-level. Scholars see a need for a social system or infrastructure that<br />

“would accept or support the suggested product-service systems” (Mont 2002). Baines et al. (2007)<br />

postulate a necessary “cultural shift”, especially in consumer minds, to “place value on having a need<br />

met as opposed to owning a product”. “Numerous examples of applications of product-service ideas in<br />

the commercial sector did not facilitate operationalism in the private market” (Mont 2002), as<br />

customers are not enthusiastic about ownerless consumption and show little readiness to accept the<br />

new concept. In parallel, this reorientation needs to be reflected in the corporate culture, in shifting the market<br />

engagement of companies from product to service sales, and in changing traditional marketing concepts.<br />

It can be assumed that both shifts require time and resources to discard psychological barriers. Mont<br />

(2002), for example, even sees a barrier in the variety of regulatory frameworks in different countries<br />

and in missing national and international reporting standards on product-service systems.<br />

On the micro-level, companies face multiple barriers along the product life cycle. Companies need<br />

to build “competence of learning and working with the customer” (Ganz 2006), especially in<br />

developing and advancing competencies of interaction with the customer/consumer (Ganz 2006);<br />

(Miller, Hope et al. 2002; Mont 2002; Schmitz 2008). In addition, close cooperation with suppliers<br />



and service producers needs to be established, while in parallel increasing the qualifications of employees<br />

(Meier, Kortmann et al. 2006). In the extreme, these new forms of cooperation lead to value networks<br />

that need to be built and managed. Problems may arise out of trade-offs between co-operation and<br />

internal environmental management, information sharing, transparency and barriers in material flows,<br />

according to Mont (2002). The above-mentioned possible environmental benefits that product service<br />

systems might introduce are seen as a reason for lengthening the time to market, as environmental<br />

considerations need to be incorporated into the planning and design process. Another barrier found is the<br />

difficulty to “develop scenarios of alternative product use because they often include elements that are<br />

situated between production and consumption” (Mont 2002). A clear distinction between processes is<br />

therefore necessary. Baines et al. (2007) see three major barriers in the pricing structures of product-service<br />

systems: firstly, the lack of experience in pricing such offerings (Mont 2002);<br />

secondly, the risks that fall back to the provider but were previously borne by the<br />

customer (e.g. guarantees); and thirdly, the missing experience in structuring organizations according to<br />

product-service requirements (Ganz 2006; Meier, Kortmann et al. 2006). Discussions<br />

on the latter can be found in Galbraith (2002), while Bonnemaier et al. (2007) look into the subject of<br />

pricing. The pricing problem is based on the “changeover from short-term profit realization at the<br />

point-of-sale to medium- and long-term amortization periods at the point-of-service” (Mont 2002).<br />

This factor is in line with the resistance of companies to extend their involvement with a product<br />

beyond the point-of-sale (Mont 2002; Baines, Lightfoot et al. 2007).<br />

Example: Off-grid Lighting provided by Osram<br />

In today’s world, about 1.6 billion people depend on fuel-based lighting because they have limited or no access to the electric grid; burning kerosene for light in different types of lanterns is usually the only alternative. Based on an approximation by Osram GmbH (Osram 2008), a total of 77 billion liters of kerosene is burned every year for lighting, resulting in emissions of 190 million tons of CO2 per year. This application is not only very inefficient, but also expensive, dangerous, and a health hazard for its users. Its main advantage is that kerosene can be portioned and sold in small amounts, aligned to small and irregular incomes. The company has recognized the problem and provides a viable solution, which is described here in short: The heart of the off-grid solution is an energy hub consisting of photovoltaic panels that provide CO2-free energy with a peak power of approximately 10 kW. Its second function is a battery recharging unit, comparable to the battery charging services that have been available in off-grid areas for some time. The lighting products are continually exchanged for the customer to ensure maintenance and quality control, which in turn leads to a long service life and to recycling responsibilities for the hub provider. In a nutshell, the “key points of the system are (Osram 2008):<br />

• Efficient lamps or lighting systems using energy-saving compact fluorescent lamps and LEDs are<br />

powered by rechargeable batteries.<br />

• Customers return the standardized system to the hub and get a freshly charged system in<br />

exchange. The customer pays only for the energy, echoing a great advantage of kerosene - light<br />

can be bought in small portions according to available income. The 'deposit' for the system is<br />

taken care of with microfinancing”.<br />
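The kerosene figures quoted above can be checked with a quick back-of-the-envelope calculation. The emission factor used here (roughly 2.5 kg CO2 per liter of kerosene) is an assumed value for illustration; it is not stated in the source.

```python
# Sanity check of the Osram (2008) approximation: 77 billion liters of
# kerosene per year -> about 190 million tons of CO2 per year.
LITERS_PER_YEAR = 77e9      # liters of kerosene burned annually (Osram 2008)
EMISSION_FACTOR = 2.5       # kg CO2 per liter of kerosene (assumption)

# kg per year divided by 1e9 gives million metric tons (Mt) per year.
co2_megatons = LITERS_PER_YEAR * EMISSION_FACTOR / 1e9

print(f"approx. {co2_megatons:.0f} Mt CO2 per year")  # close to the quoted 190 Mt
```

The result lands near the 190 Mt figure in the text, which suggests the source used an emission factor in this range.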

Furthermore, the company soon realized that the hubs, which were originally designed to provide only lighting infrastructure, could also be used for other services. The company installed integrated units for water purification using special UVC lamps. Thus it can be seen that the company is taking a service- rather than a product-focused approach to providing off-grid solutions. Against the background of the benefits of product-service systems summarized above, three major benefits can be outlined: the social situation of the customers undergoes major improvement through reduced expenditure and reduced health hazards; environmental benefits are displayed in the shift from CO2 emissions to solar energy; and new business opportunities are enabled for Osram and its partners. Achim Steiner, UN Under-Secretary General and UNEP Executive Director, can be quoted as follows: “the innovation is not only providing an economic benefit for fishermen but also underlining that small-scale solutions can be the fastest and most eco-friendly way of transforming the energy needs of local communities.” (Osram 2008)<br />

For a manufacturing company like Osram, the introduction of off-grid solutions is a major expansion of traditional product-focused business models.<br />

In the following section, we compare different methods that describe the development of hybrid products.<br />

Development of hybrid products – a comparison of different methods<br />

Requirements for the development of hybrid products<br />

As stated above, one of the main challenges of hybrid product development is to consider customer individualization, which is often realized by integrating the hybrid product into the business processes of the customer. This adaptation requires a transformation from a transactional to a relational relationship between vendor and customer (Galbraith 2002).<br />

At the same time, standardization of the non-differentiating parts of the hybrid product is considered obligatory for preserving its profitability (Miller, Hope et al. 2002; Sawhney 2006). Productizing customer-specific solutions is a possible way to identify these repeatable and scalable parts (Sawhney 2006).<br />

The main difference from conventional development processes is that the knowledge gained from former solution projects is transferred into repeatable solution building blocks. An important requirement for the proposed productization is therefore a systematic analysis of solution projects.<br />

In the following section, these challenges are transformed into requirements for developing hybrid products, such that customer individualization is considered. These identified requirements are then used to compare different development methods for hybrid products.<br />

The most important requirement derived from these challenges is the reduction of complexity. This reduction is achieved through two strategies: standardization and the bundling of performance parts. Experience in the field of products has shown that standardization at the level of partial performances facilitates the combinatorial flexibility needed to fulfill customer-specific requirements (Baldwin and Clark 2000). The systematic decomposition of solutions into their parts, products and services, is therefore requirement 1 (R1) (see Table 2).<br />

Following the decomposition into reusable parts, the systematic bundling of these parts into modules is the next logical step and requirement 2 (R2). Systematic bundling means that the recombination of parts into modules is conducted according to two rules. First, only parts that strongly depend on each other are selected and combined into common modules. Second, only a loose coupling between these modules is allowed, in order to maximize flexibility.<br />
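The two bundling rules above can be sketched as a small graph computation. This is an illustrative reading, not an artifact of any of the cited methods: if strong dependencies between partial performances are recorded as edges, the modules of R2 correspond to the connected components of that graph, and everything across component boundaries remains loosely coupled by construction. The part names are invented examples.

```python
from collections import defaultdict

def build_modules(parts, strong_deps):
    """Group parts into modules = connected components of strong dependencies."""
    graph = defaultdict(set)
    for a, b in strong_deps:
        graph[a].add(b)
        graph[b].add(a)
    modules, seen = [], set()
    for part in parts:
        if part in seen:
            continue
        stack, component = [part], set()
        while stack:  # depth-first traversal over strong-dependency edges
            p = stack.pop()
            if p in component:
                continue
            component.add(p)
            stack.extend(graph[p])
        seen |= component
        modules.append(component)
    return modules

# Hypothetical example: hardware, installation and monitoring are strongly
# coupled and form one module; training has no strong dependency and
# becomes a module of its own.
parts = ["server", "installation", "monitoring", "training"]
deps = [("server", "installation"), ("installation", "monitoring")]
print(build_modules(parts, deps))
```

Parts that are only loosely related end up in separate modules, which is exactly the flexibility R2 asks for.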



Hybrid products are predestined for standardization and reuse because a product and its services are closely aligned with each other. This means that in the process of recombination, products and services need to be integrated; an integrated view is therefore required (R3).<br />

R3 is closely connected to requirement 4 (R4). In order to develop integrated products and services, the process of communication between the different engineering departments (product and service departments) needs to be supported (Spath 2005).<br />

Another important characteristic derived from services is the integration of the external factor into performance delivery, which is recognized to be contradictory to standardization (Kleinaltenkamp 2001). Standardization of integrated modules therefore requires a systematic documentation and encapsulation of customer-individual partial performances (R5).<br />

These first five requirements mainly describe the vendor view. Following the argumentation of Tuli, there is also a customer view (Tuli, Kohli et al. 2005). In order to provide additional value for the customer, the hybrid product must be configured individually for each customer during the bid process (R6). All dependencies between modules and parts must therefore be evident to the vendor, so that the consultant in charge is able to choose the right module combination during the bid process. This also allows the vendor to concentrate on the differentiating aspects of the customer-individual hybrid product.<br />
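One way to make the module dependencies of R6 "evident to the vendor" is to record them explicitly and validate a selected combination mechanically during the bid process. The following sketch is our own illustration (the module names and rule tables are invented, not taken from any cited method):

```python
# Explicit dependency knowledge for bid-time configuration (hypothetical data).
REQUIRES = {"remote_support": {"monitoring"}}        # module -> prerequisite modules
EXCLUDES = {("on_site_support", "remote_support")}   # mutually exclusive module pairs

def validate_selection(selection):
    """Return a list of rule violations for a customer-specific module set."""
    errors = []
    for module, prereqs in REQUIRES.items():
        if module in selection and not prereqs <= selection:
            errors.append(f"{module} requires {sorted(prereqs - selection)}")
    for a, b in EXCLUDES:
        if a in selection and b in selection:
            errors.append(f"{a} conflicts with {b}")
    return errors

print(validate_selection({"remote_support"}))                # missing prerequisite
print(validate_selection({"monitoring", "remote_support"}))  # no violations
```

A consultant could run such a check interactively while composing the bid, leaving only the genuinely differentiating aspects to manual judgment.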

Finally, it is important to allow continual development of the hybrid product (R7), because only this makes it possible to consider new and changing customer demands during the runtime of a hybrid product contract.<br />

Methods for the development of hybrid products<br />

In the following section, we describe different methods for developing hybrid products. Subsequently, we evaluate them against the requirements declared above.<br />

The SCORE method helps information technology (IT) solution providers to benefit from a successive standardization and reuse of non-differentiating product-service bundles. As a comprehensive standardization of IT solutions is likely to fail, the SCORE method supports the successive modularization of IT solutions. In particular, the method suggests identifying system service modules that capture single hardware and software components as well as associated services, in order to support a step-by-step standardization and reuse of product-service bundles (Böhmann, Langer et al. 2008).<br />

The SCORE method fulfills multiple requirements stated above. On the one hand, it supports the development of integrated products and services, because every module consists of product parts and services that are provided during the product’s lifecycle. Moreover, modules are linked to single and grouped features that represent customer added value, such that the customer does not choose the product because of its technical characteristics, but because of the results promised by the features. On the other hand, the SCORE method supports modularization. Every module represents a feature group and is loosely linked to other modules. This loose linkage of modules provides flexibility and improves customer integration by combining the modules that are relevant to the customer’s requirements. Additionally, the SCORE method categorizes modules into four different types (process module, system module type 1 and type 2, integration module), divided by degree of customer integration and by degree of project integration. Although the SCORE method states that integrated product-service development and collaboration are important, no explicit recommendation on how to support them is given. Furthermore, the SCORE method takes a different approach concerning requirements engineering. As proposed by Spath (2005), there is a requirements engineering phase at first, in which the hybrid product is elaborated. After this phase, the developed hybrid product can only be adapted slightly to individual customer demands during the bid process, by choosing and combining already standardized modules into the customer’s individual product. However, the SCORE method claims that modularization is performed on successful projects, where experts decide which individual project services and products have the potential to be implemented in other projects (Böhmann, Langer et al. 2008). Both approaches, development from scratch and learning from projects, are reasonable. The SCORE method should therefore be extended.<br />
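The feature-oriented module catalog described for SCORE can be pictured with a minimal data model. This is only our illustration of the idea that customers select by promised results, not by technical characteristics; the module names, features, and the flat lookup function are invented, not part of the SCORE method's actual artifacts.

```python
from dataclasses import dataclass
from enum import Enum

# The four module types named in the text (Böhmann, Langer et al. 2008).
class ModuleType(Enum):
    PROCESS = "process module"
    SYSTEM_1 = "system module type 1"
    SYSTEM_2 = "system module type 2"
    INTEGRATION = "integration module"

@dataclass
class Module:
    name: str
    type: ModuleType
    features: frozenset  # customer-visible results the module contributes to

# Hypothetical catalog entries.
catalog = [
    Module("backup_operation", ModuleType.PROCESS, frozenset({"data_safety"})),
    Module("storage_array", ModuleType.SYSTEM_1, frozenset({"data_safety", "capacity"})),
]

def modules_for(feature):
    """Select modules by the feature they promise, not by technical traits."""
    return [m.name for m in catalog if feature in m.features]

print(modules_for("data_safety"))
```

Linking modules to feature groups in this way is what lets a bid be assembled from the customer's requested results.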

A conceptual approach for integrating products and services through service-oriented architectures (SOA) is described by Beverungen et al. (2008). They attempt to implement a SOA to provide flexibility in adopting changing customer demands and in modifying organizational structures of cooperation. The introduced concept consists of three phases. The first phase, “preparation”, is used to establish a first regulatory framework, which helps to obtain an overview of the business processes of the analyzed customer. The identified business processes must comply with certain specifications (e.g. every function must state the supporting information system, the degree of automation and the executing departments). In the phase “service analysis”, business processes are extended in terms of customer integration. In doing so, every process function is examined to determine whether it can be outsourced to a third-party provider and whether its information ought to be published. Only process functions that can be transferred to another party (external factor) are examined further in terms of automation; all other functions are excluded from the analysis, as are selected services that cannot be automated. Different other criteria exist that help to build services, but these are mostly IT-related (Legner and Heutschi 2007). In the last phase, “service specification”, the identified automated services are further categorized and correlated. The categorization is done by basic features, whereby services are divided into basic services and process services; basic services are additionally divided into entity services and task services (Beverungen, Knackstedt et al. 2008).<br />

The concept describes a way to formalize business processes and to transfer them into a computable form. The SOA concept helps to develop integrated product-service bundles in some respects. It shows that the classification of modules (services) is possible by identifying interfaces between human resources or organizations instead of products. Additionally, the usage of SOA might be a way to overcome communication problems between different engineering departments, because there is a clear definition of input and output interfaces. Moreover, the compatibility of the proposed SOA structure with BPEL (Business Process Execution Language) allows integrating the business processes of the customer into the development of integrated product-service bundles.<br />
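The "service analysis" phase described above is, in essence, a two-stage filter over process functions. The sketch below is our reading of that logic, not the notation of Beverungen et al. (2008); the function names and flags are invented.

```python
# Candidate process functions of a hypothetical customer. Each function is
# annotated as required by the method: can it be transferred to an external
# party, and can it be automated?
functions = [
    {"name": "order_entry",          "transferable": True,  "automatable": True},
    {"name": "contract_negotiation", "transferable": True,  "automatable": False},
    {"name": "strategic_planning",   "transferable": False, "automatable": False},
]

# Stage 1 drops functions that cannot be transferred to another party
# (external factor); stage 2 drops those that cannot be automated.
candidates = [f["name"] for f in functions
              if f["transferable"] and f["automatable"]]

print(candidates)  # only functions passing both filters become service candidates
```

Only the surviving candidates proceed to the "service specification" phase, where they would be categorized into basic and process services.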

The framework for developing product-service systems by Botta (2007) is another method for integrated product-service development. The connection between attributes and features is seen as a key success factor for integrated product-service bundle development. The development process is considered an iterative process starting with the requested attributes of the product to be developed; this list of requested attributes forms the basis (Botta 2007). Then, existing features are examined and actual attributes are derived, which mainly describe the behavior of the product-service bundle. Afterwards, the requested attributes are matched against the actual attributes, and further development demand is derived from this gap analysis. The process is reiterated until the gap analysis returns minimal differences. Moreover, the approach (Botta 2007) supports the building of components and the description of dependencies between them. Synthesis, a central term in product development, is understood as identifying all features that are required to solve a customer problem, whereas features are described by attributes (Botta 2007). During the analysis process, which is conducted complementarily to the process of synthesis, the actually implemented features are examined and attributes are derived. The goal of this process is to verify the chosen features in terms of impacts and effects. By introducing features and attributes and identifying relations between them, the approach presents a way to integrate products and services. At the same time, it fails to give concrete advice on how to integrate the customer into the development or the individual configuration of product-service bundles.<br />
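The iterate-until-the-gap-is-minimal loop at the core of Botta's framework can be expressed in a few lines. Everything numeric here is invented for illustration (the attribute values, the 0.01 threshold, and the assumption that each development iteration closes half of each remaining gap); the structure of the loop is what matters.

```python
# Requested vs. actual attributes of a hypothetical product-service bundle.
requested = {"availability": 0.99, "response_time": 0.8}
actual = {"availability": 0.90, "response_time": 0.5}

def gap(req, act):
    """Gap analysis: the largest remaining difference over all attributes."""
    return max(abs(req[k] - act[k]) for k in req)

iterations = 0
while gap(requested, actual) > 0.01:       # reiterate until differences are minimal
    for k in actual:                        # "further development demand": close each gap a bit
        actual[k] += 0.5 * (requested[k] - actual[k])
    iterations += 1

print(iterations, gap(requested, actual))
```

Each pass corresponds to one synthesis/analysis cycle: derive the actual attributes, compare them to the requested ones, and develop further where the gap analysis demands it.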

The article by Thomas et al. describes a development methodology for product-service systems with which the characteristics of material and service components can be systematically derived on the basis of properties resulting from customer requirements (Thomas, Walter et al. 2008). Characteristic of this development process is the fact that the fulfillment of the characteristics required by the customer is not linked to a material or service component from the start; this linkage happens during the development process of the product-service system. The approach adequately integrates existing approaches for product development and service engineering (e.g. Botta 2007). At first, the authors present a regulatory framework consisting of two nested circles (an outer customer circle and an inner developer circle). The outer customer circle starts with a customer requirements engineering phase, which then leads to activities in which target features of the future hybrid product are derived (part of the inner circle). These target features fully depend on customer requirements and are therefore changed only once per run. The authors additionally suggest transforming the customer’s language, which is used to describe requirements, into the developer’s language by using different means (e.g. technical literature, domain experts). The following phase of synthesizing product and service features is very important for hybrid products and is conducted in two steps: the first step describes the structure of features that define the subsequent hybrid product; in the second step these features are related to attributes and classified by different criteria. The subsequent phase of analyzing the present attributes of the hybrid product is connected to the synthesis. There, the present attributes are elaborated and a gap analysis (e.g. a usability test) is started to find the missing pieces between targeted and present attributes of the hybrid product. After several iterations, and once the identified gap is small enough, the hybrid product is finally produced (Thomas, Walter et al. 2008).<br />

The methods introduced above are compared in Table 2. As one can see, none of the described methods fulfills all requirements.<br />

Framework for developing PSS (Botta 2007):<br />
• Decomposition of partial performances (R1): Property-based design<br />
• Systematic recomposition of partial performances (R2): Property-based design<br />
• Integrated product and service development (R3): Simultaneous development<br />
• Collaboration support (R4): Multiple runs<br />
• Encapsulation of customer individual parts (R5): Integrated solution<br />
• Customer individual configuration (R6): Integrated solution<br />
• Further development of performance parts (R7): Lifecycle product oriented<br />

SCORE method (Böhmann, Langer et al. 2008):<br />
• Decomposition of partial performances (R1): Delivery elements<br />
• Systematic recomposition of partial performances (R2): Modules and module types<br />
• Integrated product and service development (R3): Module consists of products and services<br />
• Collaboration support (R4): No particular suggestions<br />
• Encapsulation of customer individual parts (R5): Integration modules<br />
• Customer individual configuration (R6): Hybrid product consists of different modules<br />
• Further development of performance parts (R7): Explicitly supported through transformation of SLM2 to SLM1<br />

SOA method (Beverungen, Knackstedt et al. 2008):<br />
• Decomposition of partial performances (R1): Service analysis of functions<br />
• Systematic recomposition of partial performances (R2): Definition of interfaces<br />
• Integrated product and service development (R3): No particular suggestions<br />
• Collaboration support (R4): Definition of interfaces<br />
• Encapsulation of customer individual parts (R5): Classification of functions<br />
• Customer individual configuration (R6): No particular suggestions<br />
• Further development of performance parts (R7): Definition of interfaces allows independent development<br />

Hybrid product framework (Thomas, Walter et al. 2008):<br />
• Decomposition of partial performances (R1): Property-based design<br />
• Systematic recomposition of partial performances (R2): Property-based design<br />
• Integrated product and service development (R3): Synthesis phase<br />
• Collaboration support (R4): Multiple runs<br />
• Encapsulation of customer individual parts (R5): Integrated solution<br />
• Customer individual configuration (R6): Integrated solution<br />
• Further development of performance parts (R7): Hybrid product consists of different modules<br />

Legend: Requirement fulfilled / Requirement partially fulfilled / Requirement not fulfilled<br />

Table 2: Comparison of methods for developing hybrid products<br />

Conclusions<br />

Although there has been a lot of research regarding product-service systems, various aspects still present challenges for understanding hybrid products from the point of view of researchers and practitioners.<br />

One aspect that is not yet well understood is the widely varying characteristics of hybrid products. How do hybrid products targeted at consumers differ from those targeted at business customers? Integration into business processes may be a distinguishing feature.<br />

Another aspect is the role of IT and IS in the context of hybrid products. Automation of services and usage-based pricing call for a strong role of IT in hybrid products and drive the need to consider the integration of IS development into the development process of hybrid products. Research is needed on where IT can take the role of facilitator as well as enabler in designing, managing and delivering hybrid products.<br />

A key factor for hybrid products, primarily those targeted at business customers, is the flexible adjustment to customer needs. The modeling and the architecture of hybrid products therefore pose new challenges that cannot be answered solely by the models and methods of product, service and software engineering. There is a need for models that consider the hybrid product as a whole, map requirements to its different parts, and translate the “hybrid product requirements” into product, service and software requirements. Another challenge is to track change in this complex requirements network while respecting the different characteristics of each part regarding costs, lifecycle and ability to change. For example, if requirements for the hybrid product change, it is often more feasible to respond to those changes by changing the software or the service than by changing the physical product.<br />
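The feasibility argument above amounts to routing a requirement change to the part with the lowest cost of change. The following is a hypothetical illustration only; the cost figures and part names are invented and merely encode the intuition that software changes more cheaply than services, and services more cheaply than physical products.

```python
# Relative cost of changing each part of a hybrid product (invented numbers).
CHANGE_COST = {"software": 1, "service": 3, "physical_product": 10}

def cheapest_change(affected_parts):
    """Pick the part where responding to a requirement change is most feasible."""
    return min(affected_parts, key=CHANGE_COST.__getitem__)

# A changed requirement touches all three parts; software absorbs it cheapest.
print(cheapest_change(["service", "physical_product", "software"]))
```

A real model would also weigh lead times and lifecycle stage, but even this toy version shows why a requirements network should know which parts each requirement maps to.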



What are the requirements for the IS architecture of hybrid products, if they have to be easily<br />

integrated into customers’ business processes? How do new organizational structures providing hybrid<br />

products, especially value networks, influence the architecture of IS among the different players in<br />

such a network?<br />

Another question refers to the lifecycle management of hybrid products. Are traditional lifecycle<br />

models appropriate for designing and managing hybrid products? Different components of hybrid<br />

products have various lifecycles that are influenced by diverse internal and external factors. So a need<br />

arises to model lifecycles for different layers of a hybrid product and to manage dependencies<br />

between those layers and components.<br />



References<br />

Baines, T. S., H. W. Lightfoot, et al. (2007). "State-of-the-art in product-service systems." Journal of Engineering<br />

Manufacture 221: 1543-1552.<br />

Baldwin, C. Y. and K. B. Clark (2000). The power of modularity. Cambridge, Mass., MIT Press.<br />

Beverungen, D., R. Knackstedt, et al. (2008). "Entwicklung Serviceorientierter Architekturen zur Integration von Produktion und Dienstleistung." Wirtschaftsinformatik.<br />

Böhmann, T., P. Langer, et al. (2008). "Systematische Überführung von kundenspezifischen IT-Lösungen in integrierte Produkt-Dienstleistungsbausteine mit der SCORE Methode." Wirtschaftsinformatik 50(3).<br />

Bonnemeier, S., C. Ihl, et al. (2007). Wertschaffung und Wertaneignung bei hybriden Produkten - Eine<br />

prozessorientierte Betrachtung. Arbeitsbericht. München.<br />

Botta, C. (2007). Rahmenkonzept zur Entwicklung von Product-Service Systems, Eul-Verlag.<br />

Bullinger, H.-J. and A.-W. Scheer, Eds. (2003). Service Engineering - Entwicklung und Gestaltung innovativer<br />

Dienstleistungen. Berlin, Heidelberg, New York, Springer.<br />

Burianek, F., C. Ihl, et al. (2007). Typologisierung hybrider Produkte - Ein Ansatz basierend auf der Komplexität der Leistungserbringung. Arbeitsbericht. München, Lehrstuhl für Betriebswirtschaftslehre – Information, Organisation und Management der Technischen Universität München.<br />

Burianek, F., C. Ihl, et al. (2007). Vertragsgestaltung im Kontext hybrider Wertschöpfung. Arbeitsbericht.<br />

München.<br />

Ernst, G. (2007). Hybride Wertschöpfung. Bonn, DLR.<br />

Galbraith, J. R. (2002). "Organizing to Deliver Solutions." Organizational Dynamics 31(2): 194-207.<br />

Ganz, W. (2006). Designing New Hybrid Services and Dealing with Business Transformation Processes.<br />

Helsinki, Finland, Fraunhofer IAO.<br />

Goedkoop, M. J., C. J. G. v. Halen, et al. (1999). "Product Service systems - Ecological and Economic Basics."<br />

Kleinaltenkamp, M. (2001). Begriffsabgrenzungen und Erscheinungsformen von Dienstleistungen. Handbuch<br />

Dienstleistungsmanagement. M. Bruhn and H. Meffert. Wiesbaden, Gabler.<br />

Koffka, K. (1935). Principles of Gestalt Psychology. New York, Harcourt-Brace.<br />

Legner, C. and R. Heutschi (2007). SOA Adoption in Practice – Findings from Early SOA Implementations.<br />

Proceedings of the 15th European Conference on Information Systems, St. Gallen.<br />

Leimeister, J. M. and C. Glauner (2008). "Hybride Produkte - Einordnung und Herausforderungen für die Wirtschaftsinformatik." Wirtschaftsinformatik.<br />

Lindemann, U. (2006). Methodische Entwicklung technischer Produkte: Methoden flexibel und situationsgerecht<br />

anwenden. Berlin, Springer.<br />

Meier, H., D. Kortmann, et al. (2006). Hybride Leistungsbündel in kooperativen Anbieter-Netzwerken. Industrie<br />

Management. 22: 25-28.<br />

Miller, D., Q. Hope, et al. (2002). "The problem of solutions: Balancing clients and capabilities." Business<br />

Horizon(March-April): 3-12.<br />

Mont, O. K. (2002). "Clarifying the concept of product–service system." Journal of Cleaner Production 10: 237-245.<br />

Osram. (2008). "OSRAM|About Us|Society and the Environment - Global Care|Products and the<br />

environment|Off-Grid Lighting|Details|Concept|index." Retrieved 25.06.2008.<br />

Osram. (2008). "OSRAM|About Us|Society and the Environment - Global Care|Products and the<br />

environment|Off-Grid Lighting|Details|index." Retrieved 25.06.2008, 2008.<br />

Sawhney, M. (2006). Going beyond the Product: Defining, Designing and Delivering Customer Solutions. The Service-dominant Logic of Marketing. R. F. Lusch. Armonk, M. E. Sharpe: 365-380.<br />

Schmitz, G. (2008). Der wahrgenommene Wert hybrider Produkte: Konzeptionelle Grundlagen und<br />

Komponenten. MKWI, München.<br />

Spath, D. (2005). Entwicklung hybrider Produkte - Gestaltung materieller und immaterieller Leistungsbündel.<br />

Service Engineering - Entwicklung und Gestaltung innovativer Dienstleistungen. H.-J. Bullinger and<br />

A.-W. Scheer. Berlin, Springer: 463-502.<br />

Thomas, O., P. Walter, et al. (2008). "Product-Service Systems: Konstruktion und Anwendung einer<br />

Entwicklungsmethodik." <strong>Wirtschaftsinformatik</strong> 50(3).<br />

Tukker, A. (2004). "Eight types of product-service system: eight ways to sustainability? Experiences from<br />

suspronet." Business Strategy and the Environment 13: 246-260.<br />



Tuli, R., A. Kohli, et al. (2005). "Rethinking Customer Solutions: From Product Bundles to Relational Processes." Journal of Marketing.<br />



Technology Acceptance Research – current development and concerns<br />

Mark Bilandzic, Uta Knebel, Daniela Weckenmann<br />

Literature Review<br />

Technische Universität München<br />
Lehrstuhl für Wirtschaftsinformatik (I 17)<br />
Boltzmannstr. 3 – 85748 Garching<br />
markbilandzic@gmail.com; {knebel; daniela.weckenmann}@in.tum.de<br />

Abstract<br />

Technology Acceptance (TA) research is considered to be one of the most mature research fields in<br />

the information systems discipline. TA research has produced several well-established acceptance<br />

models, among them the Technology Acceptance Model (TAM) and the relatively recent Unified<br />

Theory of Acceptance and Use of Technology (UTAUT), that have been broadly used for a multitude<br />

of IS research projects in many domains and contexts. However, various authors have recently raised serious concerns about past TA research and demand major changes in, or even a shift of, existing paradigms. The goals of this paper are: a) to provide an overview of established TA models and their origins, b) to present major theoretical concerns and points of criticism associated with these models, and c) to give recommendations to researchers who might want to apply TA models in their research.<br />

Development of Technology Acceptance Research<br />

The goal of many information systems is to increase people’s productivity and efficiency by helping them to perform specific tasks. However, a perfect implementation of a technical solution does not necessarily mean that users will adopt and actually use the system. Technology acceptance research examines how people come to accept and use a technology, i.e. an information system. There are<br />

several theories and approaches to understand technology acceptance behavior. In this paper, we will<br />

present two models in more detail: the Technology Acceptance Model (TAM), because it is probably<br />

the most well-known model in the field, and the recent Unified Theory of Acceptance and Use of<br />

Technology (UTAUT), as it comprises all of the other major models. We will also point out the<br />

difference between utilitarian and hedonic system acceptance.<br />

Technology Acceptance Model<br />

TAM is referred to as the most influential and most commonly used theory in information systems research. Introduced by Davis (Davis, F.D. 1986), TAM is based on the Theory of Reasoned Action (TRA) (Fishbein, M./Ajzen, I. 1975) and suggests that the adoption of technology is determined by two constructs in the context of salient personal beliefs: it introduces perceived usefulness (PU) and perceived ease of use (PEOU) as direct determinants of the intention to use a technology and consequently of people’s acceptance and actual use behavior with respect to new technologies.<br />

Historically, the TRA suggests that a person’s behavioral intention, which is assumed to be a key<br />

driver for his or her actual behavior, depends on two constructs. The first construct is the person’s<br />

personal attitude, i.e. the strength of intention to perform this behavior. The bigger his intention is to<br />

perform a certain behavior, the bigger the chance is that he will actually do so. The second construct is<br />

the subjective norm, i.e. the level to which the respective person believes significant others think he or<br />

she should perform the planned behavior. One issue about the TRA is that it assumes behavioral<br />

intention to be an exclusive determinant for a person’s actual behavior, ignoring the existence of<br />

personal control over one’s actual behavior. Ajzen addresses this issue with the Theory of Planned<br />

Behaviour (TPB), which links a person’s behavioral intention and the actual behavior, by introducing<br />

a new variable “perceived behavioral control”. TRA and TPB have been introduced as theories to<br />

explain general behavioral intention and actual behavior and were then adapted to the context of<br />

technology acceptance, which is finally represented by TAM. During the last two decades, many<br />

studies have investigated and reiterated TAM in different domains and have mostly confirmed PU as<br />

well as PEOU to be significant determinants of people’s intention to use and actual use of technology.<br />
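To make TAM’s structure concrete, its core relationship can be sketched as a simple linear model. The simulated survey scores and the weights below are our own illustrative assumptions, not estimates from Davis’s data.

```python
import numpy as np

# Illustrative sketch only: simulate hypothetical 7-point survey scores for
# perceived usefulness (PU) and perceived ease of use (PEOU), and model
# behavioral intention (BI) as a linear function of both, as TAM posits.
rng = np.random.default_rng(0)
n = 200
pu = rng.uniform(1, 7, n)      # perceived usefulness scores (hypothetical)
peou = rng.uniform(1, 7, n)    # perceived ease of use scores (hypothetical)
bi = 0.6 * pu + 0.3 * peou + rng.normal(0, 0.5, n)  # assumed true weights

# Estimate the determinants' weights by ordinary least squares.
X = np.column_stack([pu, peou, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, bi, rcond=None)
print(beta)  # recovered weights for PU, PEOU and the intercept
```

On this simulated data the regression recovers weights close to the assumed ones, which is the sense in which PU and PEOU act as direct determinants of intention in the model.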

Unified Theory of Acceptance and Use of Technology (UTAUT)<br />

Over the past 40 years, several different models trying to explain technology acceptance were<br />

developed and refined, leading to relatively unconnected and even “competitive” research approaches.<br />

In an effort to harmonize these competing approaches and integrate the existing knowledge into one<br />

powerful model, Venkatesh, Morris et al. (Venkatesh, V. et al. 2003) proposed the Unified Theory of<br />

Acceptance and Use of Technology (UTAUT), a model of technology acceptance based on conceptual<br />

and empirical similarities across eight widely used technology acceptance theories (for the theories<br />

and their interrelations see Figure 1).<br />


Figure 1: Models and Theories integrated in UTAUT (own illustration)<br />

With UTAUT, Venkatesh et al. integrate eight previously established individual models on user<br />

acceptance of information technology into one unified model by identifying common determinants and<br />

moderators. After empirically analyzing and comparing the individual determinants of those models,<br />

Venkatesh et al. identified similarities among the antecedents of behavioural intention and use<br />

behaviour across all eight models (see Table 1) and compared the explanatory power of these<br />


constructs. As a result, they consolidate the antecedents and their measurement scales and propose three<br />

direct determinants for people’s intention to use (Performance Expectancy, Effort Expectancy, Social<br />

Influence), and two direct determinants for their actual usage behaviour (Facilitating Conditions,<br />

Behavioural Intention).<br />

Table 1: Roles of determinants in existing models (own illustration)<br />

Apart from the antecedents, UTAUT incorporates up to four moderating influences from various<br />

models on each determinant and its relationship to intention and usage of technology. These include,<br />

for example, demographic characteristics like gender and age, organizational context (mandatory<br />

or voluntary usage) and user experience with the given technology. Figure 1 illustrates the elements of<br />

UTAUT.<br />

Figure 1: UTAUT (Venkatesh, V. et al. 2003): the determinants Performance Expectancy, Effort Expectancy, Social Influence and Facilitating Conditions act on Behavioral Intention and Use Behavior, moderated by Gender, Age, Experience and Voluntariness of Use<br />
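The role of UTAUT’s moderators can be made concrete with a moderated-regression sketch. The data, the coefficients, and the choice of effort expectancy with an experience moderator are purely illustrative assumptions, not Venkatesh et al.’s estimates.

```python
import numpy as np

# Sketch of a UTAUT-style moderated relationship (hypothetical data and
# coefficients): the effect of effort expectancy (EE) on behavioral
# intention (BI) weakens with experience, modeled as a negative
# EE x experience interaction term.
rng = np.random.default_rng(1)
n = 500
ee = rng.uniform(1, 7, n)       # effort expectancy score (hypothetical)
exp_ = rng.uniform(0, 1, n)     # experience, scaled 0 (novice) to 1 (expert)
bi = 0.8 * ee - 0.5 * ee * exp_ + rng.normal(0, 0.3, n)

# Moderated regression: include the interaction as its own predictor.
X = np.column_stack([ee, ee * exp_, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, bi, rcond=None)
print(coef)  # the second coefficient captures the moderating effect
```

A negative estimated interaction coefficient reflects the moderation: the more experienced the user, the smaller the effect of effort expectancy on intention.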

Another outcome of their initial review is that prior validation tests of the models suffer from five<br />

limitations. First, the technologies studied were not as complex and sophisticated as those used in<br />

companies. Second, study participants were mainly students, and research was not conducted with<br />

data collected in organizations. Third, the timing of measurement was not ideal, because behaviour<br />

had already become routinized; the individuals were therefore already familiar with the system at the<br />

time of measurement. Fourth, measurement at different stages of user experience was missing in the<br />

studies. Fifth, the voluntary context of all studies is not transferable to the mainly mandatory context<br />

in organizations (Venkatesh, V. et al. 2003). In their<br />

empirical test of UTAUT, they made efforts to address these limitations (Venkatesh, V. et al. 2003).<br />

As a result, UTAUT was able to account for 70 percent of the variance in usage intention (Venkatesh,<br />

V. et al. 2003).<br />



Utilitarian versus Hedonic Systems<br />

The models described above explain user acceptance for productivity-oriented (or utilitarian)<br />

information systems which aim to provide instrumental value to the user. From our point of view an<br />

interesting element exists within research on hedonic (or pleasure-oriented) information systems.<br />

Hedonic systems aim to provide self-fulfilling value to the user. In parallel to the research on UTAUT,<br />

van der Heijden (van der Heijden, H. 2004) studied the differences between utilitarian and hedonic<br />

systems on the basis of TAM (Davis, F.D. 1986), regarding the two constructs perceived usefulness<br />

(PU) and perceived ease of use (PEOU). The interesting additional element is perceived enjoyment<br />

(PE). This is defined as “the extent to which the activity of using the computer is perceived to be<br />

enjoyable in its own right, apart from any performance consequences that may be anticipated” (Davis,<br />

F.D./ Bagozzi, R.P./Warshaw, P.R. 1992). Figure 2 shows his research model.<br />


Figure 2: Research model for acceptance of hedonic systems (van der Heijden, H. 2004)<br />

Van der Heijden recommends transferring the results to utilitarian systems, because the construct<br />

perceived enjoyment is regarded as important for productivity-oriented information systems. From his<br />

point of view “it seems useful to embark on systematic research on how hedonic features could add to<br />

the acceptance of utilitarian systems” (van der Heijden, H. 2004).<br />

Current Developments and Status Quo of Technology Acceptance Research<br />

Since the introduction of TAM by Davis (Davis, F.D. 1986), most research has been devoted to small<br />

modifications, incremental tweaking and application of TAM to different domains of IS. However,<br />

this work has added only little knowledge to the original<br />

version of TAM. Due to the many different TAM variations, there is theoretical confusion about the<br />

importance of some determinants and influencing relationships in adoption research. Historically the<br />

overall research on technology acceptance has iterated the original TAM model and extended it to<br />

various settings and domains. Today we have various slightly different models; across all related<br />

studies of the last two decades, the number of variables moderating the key determinants of user<br />

acceptance has reached 21 (Lee et al. 2003). This has left researchers confused about which<br />

adoption model to use for which use case. The most recent development in<br />

the TAM evolution has been reached with the Unified Theory of Acceptance and Use of<br />

Technology (UTAUT) (Venkatesh, V. et al. 2003). UTAUT merges all previous theories and<br />

evolutionary variations of TAM into one unified theory. On the one hand, the UTAUT approach<br />

seems to be progress, since it outperforms individual theories like TRA, TPB or TAM in predicting<br />

user acceptance (Venkatesh, V. et al. 2003); on the other hand, it demonstrates the ironic evolution of<br />

TAM: TAM originated as a simplified version of the TRA (Fishbein, M./Ajzen, I. 1975), and was then<br />

adapted to the IT context by cutting down irrelevant determinants (Davis, F.D. 1986). From then on a<br />

huge wave of studies applied TAM to different domains and created many variants of it (Benbasat,<br />

I./Barki, H. 2007), reaching the latest development in technology acceptance modeling with UTAUT,<br />

which tries to accomplish the exact reverse by unifying all individual theories again. Such<br />

syntheses suggest that the research field has reached a certain level of maturity, as many of the


recent studies only propose replications and minor incremental extensions to the old models such as<br />

TAM. This situation has raised much criticism of technology acceptance research as a whole. In 2007,<br />

the Journal of the Association for Information Systems dedicated an entire issue to theoretical concerns<br />

with the model.<br />

Theoretical Concerns with the Research Field and Evolution of Technology<br />

Acceptance Research<br />

Research on TAM has primarily focused on investigating which factors are responsible for people<br />

actually making use or not making use of a certain technology. Indeed, many researchers agree that<br />

TAM has essentially contributed to our understanding that certain determinants, namely perceived<br />

usefulness and perceived ease of use, strongly influence people’s actual use behavior toward new technology<br />

(Bagozzi, R.P. 2007; Benbasat, I./Barki, H. 2007; Goodhue, D.L. 2007). Moreover, its key<br />

determinants have been confirmed by various research studies in different settings and domains.<br />

However, most recently in its evolution, the general research field on technology adoption, which is<br />

mostly based on TAM, has been widely criticized (Journal of the Association for Information Systems<br />

2007) and said to have lost track of its original research objective (Benbasat, I./Barki, H. 2007).<br />

Rather than actually applying TAM as a tool to investigate what design artifacts facilitate the key<br />

determinants of TAM, the main focus of studies in the area of technology acceptance has been put on<br />

enhancing TAM as a model, e.g. reconfirming and expanding its key determinants such as perceived<br />

usefulness or ease of use. However, the determinants themselves have been treated as a “black box”.<br />

There is still a huge knowledge gap when it comes to explaining which IT artifacts and designs actually<br />

lead users to perceive a system as useful or easy to use (Benbasat, I./Barki, H. 2007; Goodhue,<br />

D.L. 2007; Straub Jr., D.W./Burton-Jones, A. 2007). Thus, TAM research has made only a small<br />

contribution to generating practical knowledge about how to build useful systems.<br />

Analyzing the history of TAM, Benbasat and Barki (Benbasat, I./Barki, H. 2007) argue that<br />

current research studies on TAM only create an “illusion of knowledge”, since they provide<br />

hardly any actionable advice on design factors and paradigms for how to create and<br />

implement usable systems. Further research should rather investigate influencing factors and system<br />

artifacts that lead to higher usability and perceived usefulness of IT systems. Future studies have to<br />

answer the question of what makes systems useful and easy to use. This reorientation<br />

of the research field should provide more substantial outcomes by opening the “black box”. The<br />

new focus should be on investigating the antecedents of the key determinants of perceived usefulness<br />

and perceived ease of use, resulting in a set of applicable constructs and guidelines that inform<br />

system designers how to create and implement IT that is more likely to be accepted by users (Benbasat,<br />

I./Barki, H. 2007; Goodhue, D.L. 2007).<br />

Goodhue (Goodhue, D.L. 2007) even suggests going one step further back. From his point of view,<br />

technology acceptance and actual utilization does not necessarily imply better performance. This<br />

means that before even starting to worry about user acceptance of a particular technology, one should<br />

first ask whether the technology in question would really increase people’s<br />

performance once it is accepted and utilized. If this is not the case, the question of<br />

how to make users accept and use the technology becomes irrelevant. Instead, one has to figure<br />

out how the technology can be adapted towards truly meeting the users’ needs in order to help them<br />

achieve their tasks. Basically, Goodhue’s point with the task-oriented approach is that new<br />

technology has to be adapted to users’ actual task, before questioning the causes of users accepting or<br />

not accepting this new technology. He argues that this limited focus on how to get people to accept and<br />

utilize a new technology is a key shortcoming of TAM. It implicitly assumes that utilization of<br />



technology automatically leads to a better performance, which is not always true (Goodhue, D.L.<br />

2007).<br />

Further criticism relates to the methodology used in nearly all studies in the field of technology<br />

acceptance. The independent variables, i.e. perceived usefulness and perceived ease of use, are<br />

mostly measured by asking users how useful and easy to use they perceived the<br />

information technology in question to be. However, the dependent variable, i.e. the user’s actual use<br />

behavior, is mostly gathered using the same method as for the independent variables, i.e. directly<br />

asking questions to the user. Straub and Burton-Jones (Straub Jr., D.W./Burton-Jones, A. 2007) argue<br />

this “common self-reporting method” used for gathering the independent as well as dependent<br />

variables is very likely to create artificial correlations, which obscure potential theoretical linkages<br />

between them. For example, participants who find that a particular technology is not useful to them<br />

are very unlikely to confess afterwards that they actually use it. This argument attacks the widely<br />

assumed validity of the TAM (Bagozzi, R.P. 2007; Benbasat, I./Barki, H. 2007; Davis, F.D. 1986;<br />

Venkatesh/ Davis/Morris 2007) and its relationship between perceived usefulness and ease of use on<br />

the one hand and actual usage on the other hand. In order to confirm that perceived usefulness and<br />

ease of use truly are determinants of actual use of technology, Straub and Burton-Jones suggest further<br />

studies that carefully avoid such potential methodological artifacts. For information about actual<br />

usage, they propose sources that can be tapped independently of people’s perceptions of usefulness<br />

and ease of use. Such data sources could for example be data logs or other indicators of actual use<br />

(Straub Jr., D.W./Burton-Jones, A. 2007).<br />

Bagozzi (Bagozzi, R.P. 2007) proposes the foundation of a completely new paradigm for further<br />

research on user acceptance. The idea is that behavior is the result of an individual decision-making<br />

process, in which the decision maker is mainly driven by his or her personal goal orientation. In other<br />

words, individuals decide to act in the way that best helps them reach their<br />

goals. Such a goal-oriented approach assumes that people who have set specific goals<br />

normally strive to achieve them, which eventually forms an intention to perform a specific action and<br />

leads to the actual behavior towards achieving the predefined goal. This approach represents the core<br />

of universal decision making processes and provides a basis to investigate causes and effects of<br />

specific behavior.<br />

Practical recommendations<br />

Although a quite mature field in information systems research, technology acceptance research and<br />

established technology acceptance models have been heavily criticized lately. However, the suggested<br />

alternative research approaches are still rather abstract and do not present any concrete alternative<br />

models. Researchers might find themselves in a dilemma, knowing the shortcomings of the<br />

established TA models, but not having any alternative approach available. For those who might have<br />

to or want to draw on the established models, we try to derive some recommendations that may help<br />

them to address prevalent points of criticism.<br />

Be aware that TA models can only explain part of the story. All established TA models explain only<br />

part of the variance in usage intention, ranging from 17% to 70% (Venkatesh, V. et al. 2003).<br />

Researchers should try to understand what other factors may influence usage intention.<br />
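“Variance explained” here refers to the R² statistic. As a minimal illustration with made-up numbers:

```python
# Minimal sketch of "variance explained" (R squared) with made-up numbers:
# observed usage intentions vs. a model's predictions of them.
observed = [2.0, 3.5, 5.0, 4.0, 6.5, 3.0]
predicted = [2.5, 3.0, 4.5, 4.5, 6.0, 3.5]

mean_obs = sum(observed) / len(observed)
ss_tot = sum((y - mean_obs) ** 2 for y in observed)              # total variance
ss_res = sum((y - p) ** 2 for y, p in zip(observed, predicted))  # unexplained part
r_squared = 1 - ss_res / ss_tot
print(round(r_squared, 2))  # prints 0.88
```

Even at UTAUT’s reported 0.70, almost a third of the variance in usage intention remains unexplained, which is the point of this recommendation.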

Apply multiple methods and measures to avoid common method variance bias. Measuring two<br />

variables using similar methods introduces a bias in their observed correlation, inflating that<br />

correlation above the true relationship. This threatens the validity of findings. Researchers should<br />

therefore combine multiple methods of data collection. Regarding the measures, Sharma, Yetton and<br />

Crawford (Sharma, R./ Yetton, P./Crawford, J. 2007) found some scales to be subject to less common<br />



method variance than others. According to their analysis, researchers should prefer behavioural<br />

continuous measures with open-ended numerical scales to Likert type scales often employed in TA<br />

research.<br />
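The inflation mechanism can be demonstrated with a toy simulation. This is our own construction, not Sharma et al.’s estimation procedure: a shared “method factor” added to both measures raises their observed correlation well above the true one.

```python
import numpy as np

# Toy sketch of common method variance: two constructs that are only weakly
# related, both measured with a shared self-report "method factor" that
# contaminates the measurements.
rng = np.random.default_rng(2)
n = 1000
true_pu = rng.normal(0, 1, n)
true_use = 0.2 * true_pu + rng.normal(0, 1, n)   # weak true relationship
method = rng.normal(0, 1, n)                     # common method factor

measured_pu = true_pu + 0.8 * method
measured_use = true_use + 0.8 * method

r_true = np.corrcoef(true_pu, true_use)[0, 1]
r_observed = np.corrcoef(measured_pu, measured_use)[0, 1]
print(r_true, r_observed)  # the observed correlation is clearly inflated
```

The gap between the two correlations is exactly the artificial inflation the recommendation warns about.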

Do not rely on self-reporting alone. Apart from self-reported data, researchers may consider including<br />

system-captured data (e.g. log files, time records, etc.), which may be especially useful for<br />

measuring system usage. System-captured data is assumed to be free of common method variance<br />

biases (Sharma, R./ Yetton, P./Crawford, J. 2007) and could be used to check self-reported data.<br />
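As an illustration of such system-captured measures, a simple usage indicator can be derived from timestamped log files; the log format below is invented for this sketch.

```python
from collections import Counter
from datetime import datetime

# Sketch of deriving a usage measure from system-captured data instead of
# self-reports. The log format below is invented for illustration.
log_lines = [
    "2008-03-01T09:15:00 alice login",
    "2008-03-01T09:20:11 alice run_report",
    "2008-03-02T14:02:33 bob login",
    "2008-03-05T08:45:09 alice run_report",
]

sessions = Counter()
for line in log_lines:
    timestamp, user, action = line.split()
    # Count events per (user, day); distinct days serve as a frequency measure.
    day = datetime.fromisoformat(timestamp).date()
    sessions[(user, day)] += 1

usage_days_per_user = Counter(user for user, _ in sessions)
print(usage_days_per_user)  # days of observed use, independent of self-report
```

Such a count of active days per user can then be compared against self-reported usage as a validity check.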

Adapt usage measures to the research context. TAM explains IT acceptance, but the dependent variables<br />

in TAM are usage intentions and usage behavior (Davis, F.D. 1989). It is not clear that either of these constructs<br />

completely captures the notion of acceptance (Trice, A.W./Treacy, M.E. 1986). Burton-Jones and<br />

Straub (Burton-Jones, A./Straub Jr., D. 2006) argue that researchers should pay more attention to the<br />

systematic development of usage measures in particular contexts. Their suggested two-stage method<br />

(Burton-Jones, A./Straub Jr., D. 2006) provides guidance on this.<br />

References<br />

Bagozzi, R.P. (2007): The Legacy of the Technology Acceptance Model and a Proposal for a Paradigm Shift. In:<br />

Journal of the Association for Information Systems, Vol. 8 (2007) Nr. 4, S. 244-254.<br />

Bandura, A. (1986): Social Foundations of Thought and Action: A Social Cognitive Theory, Prentice-Hall,<br />

Englewood Cliffs, NJ 1986.<br />

Benbasat, I.; Barki, H. (2007): Quo vadis, TAM? In: Journal of the Association for Information Systems, Vol. 8<br />

(2007) Nr. 4, S. 211-218.<br />

Burton-Jones, A.; Straub Jr., D. (2006): Reconceptualizing System Usage: An Approach and Empirical Test.<br />

In: Information Systems Research (ISR), Vol. 17 (2006) Nr. 3, S. 228–246.<br />

Compeau, D.R., and Higgins, Christopher A. (1995): Application of Social Cognitive Theory to Training for<br />

Computer Skills. In: Information Systems Research, Vol. 6 (1995) Nr. 2, S. 118-143.<br />

Davis, F.D. (1986): A Technology Acceptance Model for Empirically Testing New End-User Information<br />

Systems: Theory and Results. Doctoral dissertation, Massachusetts Institute of Technology, Sloan School of Management, Cambridge, MA 1986.<br />

Davis, F.D. (1989): Perceived Usefulness, Perceived Ease of Use and User Acceptance of Information<br />

Technology. In: MIS Quarterly, (1989) Nr. September 1989, S. 319-339.<br />

Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. (1992): Extrinsic and Intrinsic Motivation to Use Computers in the<br />

Workplace. In: Journal of Applied Social Psychology, Vol. 22 (1992), S. 1111-1132.<br />

Fishbein, M.; Ajzen, I. (1975): Belief, attitude, intention and behavior : an introduction to theory and research,<br />

Addison-Wesley, Reading, Mass. 1975.<br />

Goodhue, D.L. (2007): Comment on Benbasat and Barki's "Quo vadis, TAM?" article. In: Journal of the<br />

Association for Information Systems, Vol. 8 (2007) Nr. 4, S. 219-222.<br />

Journal of the Association for Information Systems (2007): Vol. 8 (2007) Nr. 4.<br />

Lee, Y.; Kozar, K.A.; Larsen, K.R.T. (2003): The Technology Acceptance Model: Past, Present, and<br />

Future. In: Communications of the AIS, (2003) Nr. 12, S. 752-780.<br />

Rogers, E.M. (1995): Diffusion of Innovations: Modifications of a Model for Telecommunications. In: Die<br />

Diffusion der Innovationen in der Telekommunikation. Hrsg.: Stoetzer, M.-W.; Mahler, A. Berlin,<br />

Springer 1995, S. 25-38.<br />

Sharma, R.; Yetton, P.; Crawford, J. (2007): Estimating the Effect of Common Methods Variance from<br />

Findings of Mono-method Studies. In: Submission to MISQ, (2007).<br />

Straub Jr., D.W.; Burton-Jones, A. (2007): Veni, Vidi, Vici: Breaking the TAM Logjam. In: Journal of the<br />

Association for Information Systems, Vol. 8 (2007) Nr. 4, S. 223-229.<br />

Taylor, S.; Todd, P.A. (1995): Understanding information technology usage: A test of competing models. In:<br />

Information Systems Research, Vol. 6 (1995) Nr. 2, S. 144-176.<br />

Thompson, R.L.; Higgins, C.A.; Howell, J.M. (1991): Personal Computing: Toward a Conceptual Model of<br />

Utilization. In: MIS Quarterly, Vol. 15 (1991) Nr. 1, S. 124-143.<br />

Triandis, H.C. (1977): Interpersonal behavior, Brooks/Cole Publ., Monterey, California 1977.<br />



Trice, A.W.; Treacy, M.E. (1986): Utilization as a dependent variable in MIS research. Paper presented at the<br />

7th International Conference on Information Systems, San Diego, Ca, USA, S. 227–239.<br />

van der Heijden, H. (2004): Hedonic Information Systems. In: MIS Quarterly, Vol. 28 (2004) Nr. 4, S. 695-704.<br />

Venkatesh; Davis; Morris (2007): Dead Or Alive? The Development, Trajectory And Future Of Technology<br />

Adoption Research. In: Journal of the Association for Information Systems, Vol. 8 (2007) Nr. 4, S. 267-<br />

286.<br />

Venkatesh, V.; Davis, F.D. (2000): A Theoretical Extension of the Technology Acceptance Model: Four<br />

Longitudinal Field Studies. In: Management Science, Vol. 46 (2000) Nr. 2, S. 186-204.<br />

Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. (2003): User acceptance of information technology:<br />

Toward a unified view. In: MIS Quarterly, Vol. 27 (2003) Nr. 3, S. 425-478.<br />



Requirements Engineering, Prototyping and Evaluation in Information Systems Research<br />

Introduction<br />

Marina Berkovich, Holger Hoffmann, Maximilian Pühler<br />

WISSS² Panel Paper<br />

Technische Universität München<br />

Lehrstuhl für Wirtschaftsinformatik (I 17)<br />

Boltzmannstr. 3 – 85748 Garching<br />

berkovic;hoffmanh;puehler@in.tum.de<br />

Research in Information Systems in Germany is often performed by construction and evaluation of<br />

artefacts like models and prototypes (Frank 2000). When performing such design oriented research,<br />

the works of e.g. Alexander (1973); Carroll (1989); Hevner (2004); March (1995); Markus (2002);<br />

Nunamaker (1991); Owen (1997); Simon (1996); Takeda (1990); Vaishnavi (2004) on the creation of<br />

artificial constructs for altering reality are part of the key literature. Those works present the<br />

foundations of design oriented research, including descriptions of research frameworks (e.g. (Hevner<br />

et al. 2004)) and the course of research in detail (e.g. (Takeda et al. 1990)). What those general and<br />

universally applicable descriptions are – of course – missing, is a matching of concrete, existing<br />

methods to the various steps in the research process.<br />

For research in Information Systems, targeting at identifying a relevant problem in practice and<br />

solving the problem or improving the results, a matching of methods and research steps is needed<br />

that: allows the mapping of the problem to proposed solutions (i.e. requirements engineering), lets the<br />

researcher create an artefact representing the solution (i.e. prototyping) and enables the quality control<br />

of the artefact, with and without the aspired user (i.e. evaluation/piloting).<br />

Figure 1: Mapping methods/methodologies to Takeda’s (1990) “Design Cycle”<br />

Takeda et al (1990) describe their design cycle with the following five sub-processes:<br />

1. Awareness of the problem: Picking up a problem by comparing the object under<br />

consideration with the specifications. This is one aspect of requirements engineering, where<br />

the specification of the desired object is determined using various techniques.<br />

2. Suggestion of key concepts needed to solve the problem, and<br />



3. Development of an artefact using those key concepts as well as various types of design<br />

knowledge. If another unsolved problem is found, it is added to the collection of problems<br />

and an attempt is made to solve it during another iteration of the design cycle. Prototyping can be used<br />

as a methodology for those two sub-processes, since it enables the researcher to focus on<br />

implementing the novel features of the system and allows an iterative approach for refining<br />

the system between short build phases.<br />

4. Evaluation of the newly created object in different dimensions. Mismatches, e.g. between the<br />

output and the initially collected problems, indicate problems during development, while<br />

negative user evaluations can also indicate undetected problems or inadequate suggestions.<br />

5. Conclusion: deciding which of the created objects to adopt and further adapt.<br />
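The five sub-processes above can be sketched as an iterative loop. The function bodies here are trivial placeholders for whatever concrete RE, prototyping and evaluation methods a study would plug in.

```python
# Schematic sketch of Takeda et al.'s design cycle as an iterative loop.
# The helper functions are placeholders, not part of Takeda's description.
def design_cycle(initial_problems, max_iterations=5):
    problems = list(initial_problems)
    artefacts = []
    for _ in range(max_iterations):
        if not problems:
            break
        problem = problems.pop(0)                   # 1. awareness of the problem
        concepts = suggest_concepts(problem)        # 2. suggestion of key concepts
        artefact, new_problems = develop(concepts)  # 3. development
        problems.extend(new_problems)               # unsolved issues re-enter the cycle
        if evaluate(artefact, problem):             # 4. evaluation
            artefacts.append(artefact)              # 5. conclusion: adopt the artefact
    return artefacts

# Trivial stand-ins so the sketch runs:
def suggest_concepts(problem):
    return [f"concept for {problem}"]

def develop(concepts):
    return (f"prototype({concepts[0]})", [])

def evaluate(artefact, problem):
    return True

print(design_cycle(["slow checkout"]))
```

The loop makes explicit where requirements engineering (step 1), prototyping (steps 2–3) and evaluation (step 4) attach to the cycle.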

In the following sections we will give a short introduction to requirements engineering, prototyping<br />

and evaluation research as well as an overview of current publications and the state of the art in<br />

those fields. For each section we first present a state-of-the-art overview, followed by a<br />

discussion of ongoing research published in major A-journal publications.<br />

Requirements Engineering<br />

In this section we give a survey of the aspects of requirements engineering and of the<br />

methods and tools applied in RE. The first step is to clarify what a requirement, and thus requirements<br />

engineering, is.<br />

To understand requirements engineering it is important to define the term “requirement”. IEEE<br />

Standard 610.12-1990 (1990) defines a requirement as: „A requirement is a (1) condition or<br />

capability needed by a user to solve a problem or achieve an objective; (2) condition or capability that<br />

must be met or possessed by a system or system component to satisfy a contract, standard,<br />

specification, or other formally imposed documents; (3) documented representation of a condition or<br />

capability as in (1) or (2)”. For the literature review we selected the top 5 books on requirements<br />

engineering.<br />

The selection is based on the sales rank of Amazon. We have searched for "Requirements<br />

engineering" and "Requirements management" and selected the books used in our review.<br />

There are a lot of definitions of requirements engineering. Boehm (1979) gives the following<br />

definition: „Software requirements engineering is the discipline for developing a complete, consistent<br />

unambiguous specification – which can serve as a basis for common agreement among all parties<br />

concerned – describing what the software product will do (but not how it will do it, this is to be done<br />

in the design specification)“. Nuseibeh and Easterbrook (2000) state that “ […] software systems<br />

requirements engineering (RE) is the process of discovering that purpose, by identifying stakeholders<br />

and their needs, and documenting these in a form that is amenable to analysis, communication, and<br />

subsequent implementation.” Aurum and Wohlin (2005) define RE as a process referring “to all lifecycle<br />

activities related to requirements”. This process consists of gathering, documenting and<br />

managing requirements. Aurum and Wohlin (2005) distinguish four approaches for describing<br />

requirements engineering. Thus the process of requirements engineering can be described in linear,<br />

non-linear, spiral or incremental form. For example, Kotonya and Sommerville (1998) present a<br />

process of requirements engineering based on linear concepts. The phases of such a process are<br />

connected by iteration. Macaulay (1996) suggests a linear process model without any interactions.<br />

Non-linear models of the RE process are also widespread. These models differ from the linear models in<br />

their iterative and cyclical nature (Loucopoulos 1995). The concept of the spiral model is to handle the<br />

requirements in each round.<br />

Aurum and Wohlin (2005) classify the activities of requirements engineering into elicitation,<br />

interpretation, analysis, documentation, verification and validation, but also change management and<br />

requirements tracing. Sommerville and Kotonya (1998) define the process of requirements<br />

engineering as a set of activities similar to the definition of Aurum and Wohlin (2005). These<br />

activities are requirements elicitation, requirements analysis and negotiation, requirements<br />

documentation, and requirements validation. Macaulay (1996) points out that “Requirements Engineering<br />

is not an isolated front-end activity to a software lifecycle process”. She describes this process as “an<br />

integral part of the larger process connected to other parts through continuous feedback loops”<br />

(Macaulay 1996).<br />

Requirements elicitation means determining the requirements. This activity is based on<br />

consultations with stakeholders, but also on examining system documents, market studies and domain<br />

knowledge. The second activity aims at analyzing the requirements; this phase of the requirements<br />

process also includes negotiations with the stakeholders to decide which requirements are to be<br />

accepted. Requirements documentation means that the accepted and determined requirements are<br />

documented using natural language or diagrams. Within the process of requirements validation, the<br />

consistency and completeness of the requirements are checked (Sommerville/Kotonya 1998).<br />
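These four activities can be pictured as stations that a requirement record passes through. The following dataclass sketch uses our own invented field names, not a standardized schema.

```python
from dataclasses import dataclass

# Sketch of the RE activities as phases of a requirement record
# (illustrative only; field names are our own, not a standard).
PHASES = ["elicited", "analyzed", "documented", "validated"]

@dataclass
class Requirement:
    identifier: str
    description: str
    source: str              # e.g. stakeholder interview, market study
    phase: str = "elicited"
    accepted: bool = False   # decided during analysis and negotiation

    def advance(self):
        i = PHASES.index(self.phase)
        if i < len(PHASES) - 1:
            self.phase = PHASES[i + 1]

req = Requirement("R-001", "The system shall export reports as PDF",
                  source="stakeholder workshop")
req.accepted = True   # outcome of analysis and negotiation
req.advance()         # elicited -> analyzed
req.advance()         # analyzed -> documented
req.advance()         # documented -> validated
print(req.phase)
```

Keeping the source of each requirement on the record also supports the tracing activity mentioned by Aurum and Wohlin.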

Approaches of requirements engineering<br />

In this section we present some approaches of requirements engineering.<br />

Model-driven requirements engineering (Versteegen 2007): Model-driven RE expands the ideas and concepts of continuous model-driven software engineering to requirements engineering. It is especially suitable for projects with frequent changes.

Visual requirements engineering (Versteegen 2007): The principle of visual RE is to describe very complex issues clearly. It primarily addresses specialist departments that write requirement specifications for software to be developed.

Knowledge-based requirements engineering (Versteegen 2007): The goal of knowledge-based RE is to systematically identify flaws in a requirements specification at an early stage. It analyzes natural-language documents and identifies flaws through a predefined set of verbs and keywords.

DisIRE (Geisser 2007): DisIRE is an abbreviation for "Distributed Internet-Based Requirements Engineering"; it is a tool-based method for distributed, internet-based requirements engineering, based on “theoretically founded and empirically validated approaches of collaborative requirements elicitation”.

Agile requirements engineering (Eberlein 2002): Agile RE means that methods of requirements engineering are incorporated into agile software engineering. It covers the following RE concepts: customer interaction, validation and verification, non-functional requirements and change management.

Table 1: Approaches of requirements engineering
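The keyword-based analysis that knowledge-based RE relies on can be illustrated with a short sketch; the weak-word list below is a made-up example, not the actual rule set of any of the cited tools:

```python
import re

# Illustrative "weak" words that typically signal ambiguous or untestable
# requirements; a real knowledge base would be far larger.
WEAK_WORDS = {"fast", "easy", "user-friendly", "flexible", "etc", "appropriate"}

def flag_flaws(spec_text):
    """Scan a natural-language specification sentence by sentence and
    report weak words, mimicking keyword-based flaw detection."""
    findings = []
    for i, sentence in enumerate(re.split(r"(?<=[.!?])\s+", spec_text), start=1):
        hits = {w for w in WEAK_WORDS
                if re.search(rf"\b{re.escape(w)}\b", sentence, re.IGNORECASE)}
        if hits:
            findings.append((i, sorted(hits)))
    return findings
```

A real knowledge-based tool would also parse sentence structure, but the keyword pass alone already catches many untestable formulations.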



Methods applied in the process of Requirements Engineering

The following section surveys the methods applied in the different phases of Requirements Engineering. We define the phases of Requirements Engineering in this paper as follows (Sommerville/Kotonya 1998):

Figure 2: Phases of the Requirement Engineering process<br />

The following subsections compare the different methods and techniques of RE according to the phases of the Requirements Engineering process.

Requirements elicitation

Pohl (2007) points out the following aspects of requirements elicitation: the identification of requirements sources according to the context of the system under development, the elicitation of existing requirements, and the development of innovative requirements. The methods applied for eliciting requirements are interviews, workshops, observation and perspective-based reading, as well as brainstorming, prototyping, mind maps and checklists (Pohl 2007). Similar methods are also suggested by Aurum and Wohlin (2005), who propose protocol analysis, apprenticing, goal-based approaches, scenarios and viewpoints. Sommerville and Kotonya (1998) define three aspects of structuring the knowledge about requirements: partitioning, abstraction and projection. The techniques they suggest (Sommerville/Kotonya 1998) are interviews, scenarios, observation and prototyping. They also argue that requirements negotiation is part of requirements elicitation; checklists and matrices are suggested as negotiation techniques.

Analysis & Negotiation of requirements

Conflicts between requirements exist because different stakeholders have different views on the system under development (Pohl 2007). The goal of requirements negotiation is to resolve these conflicts by analysing their origins and applying different methods for conflict resolution. The way a conflict is resolved has to be documented. The techniques for resolving conflicts described by Pohl are the win-win approach, which means establishing trust and willingness to compromise among the stakeholders, and the interaction matrix, which visualizes and also documents the conflicts between requirements. Schienmann (2001) also suggests the win-win approach to support the negotiation process.

Documentation of requirements

Pohl (2007) suggests a natural-language-based documentation of requirements; the applied methods are different forms of specifications. He also suggests a model-based form of requirements documentation, which rests on modelling requirements with modelling languages such as UML.

By adding a priority to each requirement, the value of the requirements with regard to defined criteria is documented (Pohl 2007). Methods for requirements prioritisation are the following: ranking, single-criteria classification (mandatory, optional, nice-to-have), Kano classification, two-criteria classification, Wiegers' matrices, cost-value analysis and combinations of multiple prioritisation techniques. Aurum and Wohlin (2005) use the following methods for prioritisation: Analytic Hierarchy Process, Cumulative Voting, Numerical Assignment, Ranking, Top-Ten Requirements and Combining Different Techniques.
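As an illustration, cost-value analysis among the listed prioritisation methods can be sketched in a few lines; the requirement identifiers and the value/cost estimates below are purely illustrative:

```python
def cost_value_ranking(requirements):
    """Cost-value analysis: normalize estimated value and cost across all
    requirements, then rank by relative value per relative cost."""
    total_value = sum(v for _, v, _ in requirements)
    total_cost = sum(c for _, _, c in requirements)
    ranked = [(rid, (v / total_value) / (c / total_cost))
              for rid, v, c in requirements]
    return sorted(ranked, key=lambda x: x[1], reverse=True)

# (requirement id, estimated stakeholder value, estimated implementation cost)
reqs = [("R1", 30, 10), ("R2", 50, 40), ("R3", 20, 50)]
```

The normalization makes the ratio independent of the estimation scale, which is why relative rather than absolute estimates suffice.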

Validation of requirements

Pohl (2007) suggests the following methods for the validation of requirements: inspections, reviews, walkthroughs, perspective-based reading and validation through prototypes. Additional activities are checklists and the construction of artefacts. Schienmann (2001) sees validation as part of quality assurance. He proposes two types of validation methods: review techniques (inspection, review, walkthrough) and user-oriented techniques (animation, validation through prototypes). Sommerville and Kotonya (1998) suggest reviews, prototyping and testing. Rupp (2004) proposes inspections, prototyping, walkthroughs and checklists as techniques for the validation of requirements.

Requirements traceability

“An SRS is traceable if the origin of each of its requirements is clear and if it facilitates the referencing of each requirement in future development or enhancement documentation” (IEEE 1990). Pohl (2007) states that traceability information can be represented as follows: textual references, hyperlinks, traceability matrices and graphs. Sommerville and Kotonya (1998) point out that “the assessment of the impact of a change on the rest of the system” is a critical element in Requirements Engineering. Thus, traceability means “the ability to describe and follow the life of a requirement, in both a forward and backward direction, i.e. from its origins, through its development and specification, to its subsequent deployment and use, and through periods of ongoing refinement and iteration in any of these phases” (Gotel/Finkelstein 1994).
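The forward and backward tracing described above can be sketched as a simple matrix of links; all requirement and artefact names here are hypothetical:

```python
# Traceability links: requirement id -> design/code artefacts derived from it.
trace = {
    "R1": ["design/login.md", "src/auth.py"],
    "R2": ["design/report.md", "src/report.py", "src/auth.py"],
}

def forward(rid):
    """Forward tracing: which artefacts does a requirement influence?
    Used to assess the impact of changing that requirement."""
    return trace.get(rid, [])

def backward(artefact):
    """Backward tracing: which requirements does an artefact realize?
    Used to recover the origin of an artefact before changing it."""
    return sorted(rid for rid, artefacts in trace.items() if artefact in artefacts)
```

Even this toy matrix shows why traceability supports change impact assessment: editing `src/auth.py` touches both R1 and R2.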

Requirements Engineering - research in progress

As Damian et al. notice, “requirements engineering is an important component of effective software engineering, yet more research is needed” in this field (Damian, Chisan 2006). Reviewing some of the most important literature sources 1 over the last five years, one can clearly see that there is a tremendous amount of ongoing research. However, this research is distributed over a great number of topics and issues, ranging from fundamental discussions of why requirements engineering is an important field of research (Thomas, Hunt 2004) to best practices on how to provoke more creativity (Maiden et al. 2004). The following illustration gives an overview of the examined literature; the numbers in brackets indicate the number of papers found on the particular topic.

Among the 60 papers discussing different aspects and topics of requirements engineering, seven mainly address the question of why requirements engineering is an important subject. While Anton (Anton 2003), Dromey (Dromey 2006), Gonzales (Gonzales 2005), Spinellis (Spinellis 2007) and Thomas (Thomas, Hunt 2004) present a more general discussion of this topic, Damian (Damian, Chisan 2006) and Sommerville (Sommerville, Ransom 2005) provide empirical studies supporting their statements.

1 Including all category A journals as proposed in WI-Journalliste 2008.<br />



Figure 3: Current research in requirements engineering 2

Concerning the state of the art of requirements engineering, Davis et al. (Davis et al. 2006) have conducted a detailed review of European literature. By contrast, Robertson (Robertson 2005) focuses more on the interdisciplinary aspects of this field, while Neill and Laplante (Neill, Laplante 2003) provide insights into the state of practice with empirical work.

Another fraction of authors presents best practices and guidelines, varying from a “systematic approach for identifying which parts of the world require your attention” (Jackson 2004) to an ironical column on the state of practice (Maiden 2007). Some authors, like Alexander, Hagge and Lappe, and Juristo et al. (Alexander 2006; Hagge, Lappe 2005; Juristo et al. 2007), present common guidelines addressing the target “to improve requirements” (Alexander 2006), while others specialize in specific objectives: provoking creativity (Maiden et al. 2004), developer integration (Alexander, Kent Beck 2007) and requirements engineering in automotive development (Weber, Weisbrod 2003).

2 Including all category A journals as proposed in WI-Journalliste 2008.


The biggest group of researchers is working on different approaches for requirement engineering<br />

purposes. In total, there are eight different approaches, discussed in 11 articles: Lan Cao and Ramesh

are discussing aspects of agile requirements engineering practices (Lan Cao, Ramesh 2008; Ramesh et<br />

al. 2007), Geisser et al. are working on distributed internet-based requirements engineering (DisIRE)<br />

(Geisser et al. 2007), Gonzalez et al. and Kavakli et al. on goal oriented requirements engineering,<br />

(Gonzalez‐Baixauli et al. 2005; Kavakli, Loucopoulos 2006), Gordijn et al. are integrating “two<br />

requirements engineering techniques, i* and e3 value” (Gordijn et al. 2006), Anton and Potts are<br />

proposing functional paleontology as a means for reverse engineering (Anton, Potts 2003), Gault et al.<br />

are discussing immersive scenario-based requirements engineering with virtual prototypes (Gault et<br />

al.), Malcom “uses the discipline of "requirements engineering" to validate the use of Rapid<br />

Application Development (RAD) techniques” (Malcom 2001) and finally, Natt och Dag et al. and<br />

Sawyer et al. are discussing linguistic-engineering approaches. (Natt och Dag, Joachim et al. 2005;<br />

Sawyer et al. 2005)<br />

Though stakeholders are probably the most important aspect of requirements engineering, only three

articles could be found dealing with them. With their “outcome‐based stakeholder risk assessment<br />

model” (OBSRAM), Woolridge et al. developed “a step‐by‐step approach to identifying stakeholders<br />

during requirements engineering”. (Woolridge et al. 2007) Decker et al. are proposing a wiki-based<br />

approach for stakeholder integration (Decker et al. 2007) and Niu is focusing on developing<br />

stakeholder consensus on software goals. (Niu, Easterbrook 2007)<br />

Early phases of requirements engineering, especially the subject of requirements elicitation, are<br />

discussed by Kandrup and Baniassad et al. (Kandrup 2005; Baniassad et al. 2006). In addition,<br />

Alexander describes the usage of misuse cases in requirements engineering (Alexander 2003),<br />

Robertson describes the importance of business cases, and Sangsoo et al. provide insights into value‐innovative

requirements for unknown customers. (Sangsoo Kim et al. 2008)<br />

After requirements have been elicited, the next major step is the documentation of those requirements.<br />

Bontemps et al. are showing how to get “from live sequence charts to state machines and back”<br />

(Bontemps et al. 2005) and Barbara Norden describes how to write good textual requirements.<br />

(Norden 2007) Accessorily, Azar et al. are discussing how to prioritize requirements and Martin and<br />

Melnik (Martin, Melnik 2008) are describing a method for writing early acceptance tests. (Martin,<br />

Melnik 2008)<br />

Concerning the validation of elicited and documented requirements, Gulla is proposing “a general<br />

explanation component for conceptual modeling in CASE environments”. (Gulla 1995) The aspect of<br />

tracing and tracking requirements is discussed by Dick, Vickers and Hayes et al. (Dick 2005; Vickers

2007; Hayes et al. 2005)<br />

Regarding aspects of computer supported requirements engineering, Maiden et al. discuss how<br />

“mobile RE tools help elicit stakeholder needs in the workplace”. (Maiden et al. 2007) In addition,<br />

Seyff et al. are addressing this subject by “implementing several requirements applications on mobile

devices.” (Seyff et al. 2006)<br />

Finally, there is a set of miscellaneous and specialized topics given attention to in recent research.<br />

Bhat et al., Damian, Gumm and Hanisch and Corbit are working on issues regarding global software<br />

engineering. (Bhat et al. 2006; Damian 2007; Gumm 2006; Hanisch, Corbitt 2007) The management<br />

of requirements for specialized purposes is covered by the work of Daneva (“ERP requirements

engineering practice”) (Daneva 2004), Kohl (“Requirements engineering changes for COTS‐intensive<br />

systems”) (Kohl 2005) and Moon et al. (“An approach to developing domain requirements…”).<br />

(Moon et al. 2005) Concerning special types of requirements, Regnell et al. are dealing with quality<br />

requirements (Regnell et al. 2008) and Haley et al., Keblawi and Sullivan as well as Mellado et al.



(Haley et al. 2008; Keblawi, Sullivan 2006; Mellado et al. 2007) are working on security<br />

requirements. Besides the pure existence of requirements engineering methods and approaches it is<br />

crucial for its success to be well implemented in the organizational practice, a subject, Adam et al. are<br />

working on. (Adam et al. 2008) Another approach, the usage of extreme programming, is proposed by<br />

Drobka et al. (Drobka et al. 2004)<br />

Taking recent research in the field of requirements engineering into consideration, one can state that the subject is highly diversified and that many open questions remain. In particular, the question of implementing well-working requirements engineering practices, as well as issues of stakeholder integration and communication, are highly relevant topics for future research.

Prototyping<br />

When presenting a prototype to potential end users, developers can expect more detailed feedback on several aspects of the prototype. This feedback can range from new or clarified requirements (Andriole 1994; Schrage 2004) up to an evaluation of the usability of a new system (Sharp, Rogers, Preece 2007; Te’eni, Carey, Zhang 2006). Developers might also present a prototype to support choosing between several alternatives (Andriole 1994, p. 3). There are different forms of prototypes that can be built, depending on which aspect the developer would like to receive feedback on (Floyd 1983, p. 6-12; Sharp, Rogers, Preece 2007, p. 531-538).

Depending on the stage in the development cycle and the question to be answered, different kinds of prototypes can come into use. The approaches for developing a prototype range from simple sketches on a piece of paper showing the usage scenario (“low-fidelity” prototypes) to software that is, in parts, functional (“high-fidelity” prototypes) (Sharp, Rogers, Preece 2007, p. 531-536; Te’eni, Carey, Zhang 2006, p. 308-309).

Low-fidelity prototypes do not look much like the final product, i.e. they merely depict scenarios or user interface designs. These types of prototypes are rather cheap, simply built and easy to modify, so they are useful for exploring new ideas and designs, especially in early stages of development. High-fidelity prototypes are closer to the final product than low-fidelity prototypes. They are usually made using materials that one would expect in the final product or, for software prototypes, already show interactive parts of the final system. Because of this close resemblance to the final product, high-fidelity prototyping is used for presenting ideas to stakeholders as well as for testing the technical feasibility of certain aspects of the system.

Davis (1992) defines a prototype as follows: “A prototype is the partial implementation of a system built expressly to learn more about a problem or a solution to a problem”. Riddle (1983) points out that prototyping is an approach to software development that can be used as “a prelude to preparing a version that is considered to be complete and fully certified”. A prototype is an immature version of software that is used as a basis for the assessment of ideas and decisions. Floyd (1983) likewise suggests that prototyping is a part of software development methodology that provides means of communication between developers, stakeholders and the environment. Prototyping is based on four aspects: functional selection, construction, evaluation and further use (Floyd 1983). The first step, functional selection, means the choice of the functions to be introduced by the prototype. Construction means the effort to be made in order to make the prototype available. The next step, evaluation, considers whether the prototype brings benefits for the next phases of the process. The further use of the prototype can serve different goals: it can be used for learning, or it can become part of a future system.

Floyd (1983) distinguishes three approaches to prototyping: exploratory, experimental and evolutionary prototyping. The approach of exploratory prototyping considers the aspects of



communication between software developers and users. This approach focuses on the early phases of the development process and can be utilized in order to explore new aspects of the target system or to generate new ideas. This form of prototyping is applied in order to clarify the requirements of the target system.

Experimental prototyping is applied to demonstrate a proposed solution to a problem. This form of prototyping can also be considered an extending part of the target system’s specification. The phase of development involving prototyping defines the role of the prototyping in the process: it can serve as a complementary form of specification, a form of refinement of the specification, and an intermediate step between specification and implementation.

Evolutionary prototyping is one of the most powerful approaches, but it is also very remote from the original meaning of a prototype. Instead of focusing on different aspects of a system as explorative or experimental prototypes do, an evolutionary prototype is improved and extended until it reaches the final functionality of the target system. Therefore, some authors argue that it should be called development in versions.

Prototyping - research in progress

Prototyping is an important and relevant factor during software development projects, since “errors in requirements specifications have been identified as a major contributor to costly software project failures.” (Davis, Venkatesh 2004) Therefore, the question arises how “user acceptance testing may be done much earlier in the system development process than has traditionally been the case.” (Davis, Venkatesh 2004) The following section provides a brief overview of current prototyping research in software and system engineering by observing publications over the last five years in major research journals. 3

Figure 4: Current research in prototyping 4

3 Including all category A journals as proposed in WI-Journalliste 2008.
4 Including all category A journals as proposed in WI-Journalliste 2008.


Considering those twelve publications concerning issues of prototyping in the context of software and system development, one can identify three major topics of research: prototyping approaches, prototyping frameworks and platforms, and the application of prototyping.

Berling and Runeson introduce “a systematic approach to the prototyping and the validation of a<br />

system’s performance” (Berling, Runeson 2003). Shortly summarized, they are regarding prototyping<br />

and evaluation as experiments and have confirmed their approach by conducting case studies.<br />

Lancaster is proposing paper prototypes as a means of user interface development (Lancaster 2004).<br />

Regarding the development of frameworks and platforms, Broil et al. are proposing an augmented<br />

reality framework for rapid prototyping purposes (Broil et al. 2005), Athanasas and Dear are dealing<br />

with “complex vehicle systems (CVSs)” (Athanasas, Dear 2004), “prototyping systems that are used<br />

to demonstrate a new functionality inside a prototype vehicle” (Athanasas, Dear 2004), and Wang and<br />

Yang are discussing the “design of a rapid prototype platform for ARM based embedded system” (Rui<br />

Wang, Shiyuan Yang 2004).<br />

The application of prototyping techniques is discussed in Amouh et al. (Amouh et al. 2005) who have<br />

developed a clinical information system. Li et al. are using virtual prototypes for “design verification<br />

and testing of power supply system” (Li et al. 2003) and Furse et al. are using “simple inexpensive<br />

prototyping” (Furse et al. 2004) in a teaching environment.<br />

In addition to the previously described issues on prototyping, Serich discusses the application of prototyping in software projects and the situations in which one would be well advised to avoid this technique (Serich 2005). Finally, Davis and Venkatesh underline the necessity of early user acceptance tests (Davis, Venkatesh 2004), which current prototyping techniques do not sufficiently support.

Evaluation Research and Piloting Artefacts<br />

In the following paragraphs we present the methods and means for determining the outcome of the requirements engineering process or the prototyping process, respectively. This is one of the central building blocks in the scientific community: no method or artefact should be presented to the community without a proper evaluation.

The terms evaluation and evaluation research are used with multiple different meanings; in everyday language, for example, they may denote an assessment of value, a technique for such an assessment, or the specific actions taken to quantify the value. In the context of information systems research, we are talking about the assessment of constructs (e.g. methods and artefacts) by an evaluator who has the specific knowledge needed to perform this task. The methods used for evaluation have to be objective and be grounded on justification criteria and standards (i.e. the rationale behind the criteria can be stated) (Kromrey 2001; Weiss 1974).

While this narrows down the meaning of the term “evaluation”, there is still a multitude of instances of evaluation science. Chelimsky (1997) thus proposes three paradigms as classification schemes for the different types of evaluation based on the evaluation’s purpose; Stockmann (2007a) describes similar purposes and adds a fourth one:


1. evaluation for accountability (Kromrey (2001), Stockmann (2007a): control purposes)<br />

2. evaluation to support the development<br />

3. evaluation to extend the knowledge base (Stockmann (2007a): findings)


4. evaluation to legitimate an artefact or measure

Figure 5: Purposes of evaluation, based on (Chelimsky 1997; Kromrey 2001; Stockmann 2007a)

Most of the available literature concerning evaluation is written from a social research viewpoint and thus describes and employs methods from empirical social research (Stockmann 2007b; Weiss 1974). The hypothesis being tested in such a setting is “Does the artefact accomplish the tasks it is supposed to accomplish?”. This leads to three different potential classes of results (see Figure 6).

Figure 6: Possible courses of an evaluation (Weiss 1974, p. 63)

This model is insufficient for the evaluation of artefacts in design-based IS research, since the central part, the design that creates the artefact in the first place, is not being considered. However, Weiss (1974, p. 22f) also writes that the purpose of evaluation is to benchmark the effects of an artefact against the goals it was meant to achieve, not only in order to help the decision-making process (see Fig. X), but also to improve future planning of the program. The latter intent indicates the flux of (design-) information into a next design phase.

This utilization of evaluation for the assessment of the “pre-artefact” phases is the main thought of Heinrich and Pomberger (2000). They propose that the evaluation of a prototype can be used for requirements engineering by making detailed textual requirements specifications obsolete. The fact that those prototypes can be used to test certain functionality with potential users in the aspired field is of special interest for computer science as well as IS research. Researchers are able to perform very focused evaluations, e.g. of a system’s usability (Nielsen 1993), or to look at the impact of the artefact in the field on a broader basis using pilot studies.

Activity theory emphasizes the fact that artefacts that are put to use directly by a human have what Bannon calls a “double character” (1991). Artefacts themselves can be reflected on, but they


also mediate the users’ activities with the environment. When evaluating this mediating character of artefacts, the evaluation should take place in the aspired field and with the aspired users (Schwabe/Krcmar 2000), piloting the artefact. The fundamentals of pilot projects were described by Szyperski (1971) and Witte (1997) with a focus on the use and usage of new technologies. Instantiations of pilot project research can be found in (Schwabe/Krcmar 2000), focusing on the support of city council work, and in (Leimeister/Krcmar 2006), focusing on virtual communities.

Using either approach, i.e. “just” evaluating the artefact against the requirements or evaluating the artefact’s use (and thus also evaluating the requirements), leads to an assessment of the artefact in the chosen setting as well as input for further iterations of the design-oriented research model.

Conclusion

In this paper we provided a survey of approaches to requirements engineering, prototyping and piloting. We analyzed the most relevant literature and examined the publications of the last five years in the A-journals proposed in the “WI-Journalliste 2008”. In this way we were able to identify the most popular approaches and the most recent research work.

References<br />

Amouh, T.; Gemo, M.; Macq, B.; Vanderdonckt, J.; Gariani, A. W. E.; Reynaert, M. S. et al. (2005):<br />

Versatile clinical information system design for emergency departments. In: Information Technology in<br />

Biomedicine, IEEE Transactions on, Jg. 9, H. 2, S. 174‐183.<br />

Abdi‐Jalebi, E.; McMahon, R. (2007): High‐Performance Low‐Cost Rogowski Transducers and Accompanying<br />

Circuitry. In: Instrumentation and Measurement, IEEE Transactions on, Jg. 56, H. 3, S. 753‐759.<br />

Adam, S.; Eisenbarth, M.; Ehresmann, M. (2008): Implementing Requirements Engineering Processes: Using<br />

Cooperative Self‐Assessment and Improvement. In: Software, IEEE, Jg. 25, H. 3, S. 71‐77.<br />

Alavi, M. (1984): An Assessment of the Prototyping Approach to Information Systems. In: Communications of<br />

the ACM, Vol. 27 (1984) Nr. 4, S. 556-563.<br />

Alexander, C. (1973): Notes on the Synthesis of Form, Harvard University Press, Harvard 1973.<br />

Alexander, I. (2003): Misuse cases: use cases with hostile intent. In: Software, IEEE, Jg. 20, H. 1, S. 58‐66.<br />

Alexander, I. (2006): 10 small steps to better requirements. In: Software, IEEE, Jg. 23, H. 2, S. 19‐21.<br />

Alexander, L.; Kent Beck (2007): Point/Counterpoint. In: Software, IEEE, Jg. 24, H. 2, S. 62‐65.<br />

Andriole, S.J. (1994): Fast, cheap requirements: Prototype, or else! In: IEEE Software, Vol. 11 (1994) Nr. 2, S.<br />

85-87.<br />

Anton, A. I. (2003): Successful software projects need requirements planning. In: Software, IEEE, Jg. 20, H. 3,<br />

S. 44–46.<br />

Anton, A. I.; Potts, C. (2003): Functional paleontology: the evolution of user-visible system services. In:<br />

Software Engineering, IEEE Transactions on, Jg. 29, H. 2, S. 151–166.<br />

Athanasas, K.; Dear, I. (2004): Validation of complex vehicle systems of prototype vehicles. In: Vehicular<br />

Technology, IEEE Transactions on, Jg. 53, H. 6, S. 1835‐1846.<br />

Aurum, A.; Wohlin, C. (2005): Engineering and Managing Software Requirements (1 Aufl.), Springer, Berlin<br />

2005.<br />

Baniassad, E.; Clements, P. C.; Araujo, J.; Moreira, A.; Rashid, A.; Tekinerdogan, B. (2006): Discovering<br />

early aspects. In: Software, IEEE, Jg. 23, H. 1, S. 61‐70.<br />

Bannon, L.J.; Bødker, S. (1991): Beyond the interface: encountering artifacts in use. In: Designing interaction:<br />

psychology at the human-computer interface. Eds.: Carroll, J.M. Cambridge University Press,<br />

Cambridge 1991, p. 227-253.<br />

Berling, T.; Runeson, P. (2003): Efficient evaluation of multifactor dependent system performance using<br />

fractional factorial design. In: Software Engineering, IEEE Transactions on, Jg. 29, H. 9, S. 769‐781.<br />



Bhat, J. M.; Gupta, M.; Murthy, S. N. (2006): Overcoming Requirements Engineering Challenges: Lessons<br />

from Offshore Outsourcing. In: Software, IEEE, Jg. 23, H. 5, S. 38‐44.<br />

Boehm, B.W. (1979): Guidelines for verifying and validating software requirements and design specifications,<br />

EURO IFIP 79, North Holland 1979, S. 711-719<br />

Bontemps, Y.; Heymans, P.; Schobbens, P.-Y. (2005): From live sequence charts to state machines and back: a<br />

guided tour. In: Software Engineering, IEEE Transactions on, Jg. 31, H. 12, S. 999‐1014.<br />

Broll, W.; Lindt, I.; Ohlenburg, J.; Herbst, I.; Wittkamper, M.; Novotny, T. (2005): An infrastructure for<br />

realizing custom‐tailored augmented reality user interfaces. In: Visualization and Computer Graphics,<br />

IEEE Transactions on, Jg. 11, H. 6, S. 722‐733.<br />

Carroll, J.M.; Kellogg, W.A. (1989): Artifact as theory nexus: Hermeneutics meets theory based design.<br />

Presented at: SIG CHI, p. 7-14.<br />

Chelimsky, E. (1997): Thoughts for a New Evaluation Society. In: Evaluation, Vol. 3 (1997) Nr. 1, p. 97-109.<br />

Damian, D. (2007): Stakeholders in Global Requirements Engineering: Lessons Learned from Practice. In:<br />

Software, IEEE, Jg. 24, H. 2, S. 21‐27.<br />

Damian, D.; Chisan, J. (2006): An Empirical Study of the Complex Relationships between Requirements<br />

Engineering Processes and Other Processes that Lead to Payoffs in Productivity, Quality, and Risk<br />

Management. In: Software Engineering, IEEE Transactions on, Jg. 32, H. 7, S. 433‐453.<br />

Daneva, M. (2004): ERP requirements engineering practice: lessons learned. In: Software, IEEE, Jg. 21, H. 2, S.<br />

26‐33.<br />

Davis, A. M.; Dieste, O.; Hickey, A. M.; Juristo, N.; Moreno, A. M. (2006): Scientific publication in<br />

requirements engineering in spain: an analysis in a european context. In: Latin America Transactions,<br />

IEEE (Revista IEEE America Latina), Jg. 4, H. 2, S. 55‐61.<br />

Davis, A.M. (1992): Operational prototyping: a new development approach. In: IEEE Software, Vol. 9 (1992)<br />

Nr. 5, S. 70-78.<br />

Davis, F. D.; Venkatesh, V. (2004): Toward preprototype user acceptance testing of new information systems:<br />

implications for software project management. In: Engineering Management, IEEE Transactions on, Jg.<br />

51, H. 1, S. 31‐46.<br />

Decker, B.; Ras, E.; Rech, J.; Jaubert, P.; Rieth, M. (2007): Wiki‐Based Stakeholder Participation in<br />

Requirements Engineering. In: Software, IEEE, Jg. 24, H. 2, S. 28‐35.<br />

Dick, J. (2005): Design traceability. In: Software, IEEE, Jg. 22, H. 6, S. 14‐16.<br />

Drobka, J.; Noftz, D.; Rekha Raghu (2004): Piloting XP on four mission‐critical projects. In: Software, IEEE,<br />

Jg. 21, H. 6, S. 70‐75.<br />

Dromey, R. G. (2006): Climbing over the "No Silver Bullet" Brick Wall. In: Software, IEEE, Jg. 23, H. 2, S.<br />

120‐119.<br />

Eberlein, A.; Leite, J. (2002): Agile Requirements Definition: A View from Requirements Engineering.<br />

Proceedings of the International Workshop on Time-Constrained Requirements Engineering<br />

(TCRE’02). Essen, Germany.<br />

Floyd, C. (1983): A Systematic Look at Prototyping. Paper presented at the Approaches to Prototyping, Namur,<br />

S. 1-18.<br />

Frank, U. (2000): Evaluation von Artefakten in der Wirtschaftsinformatik. In: Evaluation und<br />

Evaluationsforschung in der Wirtschaftsinformatik. Eds.: Heinrich, L.J.; Häntschel, I. Oldenbourg,<br />

München 2000, p. 35-48.<br />

Furse, C.; Woodward, R. J.; Jensen, M. A. (2004): Laboratory project in wireless FSK receiver design. In:<br />

Education, IEEE Transactions on, Jg. 47, H. 1, S. 18‐25.<br />

Gault, B.; Sutcliffe, A.; Maiden, N.: ISRE: Immersive Scenario-based Requirements Engineering with<br />

Virtual Prototypes.<br />

Geisser, M.; Heinzl, A.; Hildebrand, T.; Rothlauf, F. (2007): Verteiltes, internetbasiertes Requirements-<br />

Engineering. In: Wirtschaftsinformatik, Jg. 49, H. 3, S. 199–207.<br />

Gonzales, R. (2005): Developing the requirements discipline: software vs. systems. In: Software, IEEE, Jg. 22,<br />

H. 2, S. 59‐61.<br />

Gonzalez‐Baixauli, B.; Laguna, M.; do Prado Leite, Julio Cesar Sampaio (2005): Applying Personal<br />

Construct Theory to Requirements Elicitation. In: Latin America Transactions, IEEE (Revista IEEE<br />

America Latina), Jg. 3, H. 1, S. 1‐1.<br />

Gordijn, J.; Yu, E.; van der Raadt, Bas (2006): E‐service design using i* and e/sup 3/ value modeling. In:<br />

Software, IEEE, Jg. 23, H. 3, S. 26‐33.<br />

Gotel, O.C.Z.; Finkelstein, C.W. (1994): An analysis of the requirements traceability problem. Requirements<br />

Engineering, 1994., Proceedings of the First International Conference on. Colorado Springs, CO.<br />



Gulla, J. A. (1995): A General Explanation Component for Conceptual Modeling in CASE Environments.<br />

Gumm, D. C. (2006): Distribution Dimensions in Software Development Projects: A Taxonomy. In: Software,<br />

IEEE, Jg. 23, H. 5, S. 45‐51.<br />

Hagge, L.; Lappe, K. (2005): Sharing requirements engineering experience using patterns. In: Software, IEEE,<br />

Jg. 22, H. 1, S. 24‐31.<br />

Haley, C. B.; Laney, R.; Moffett, J. D.; Nuseibeh, B. (2008): Security Requirements Engineering: A<br />

Framework for Representation and Analysis. In: Software Engineering, IEEE Transactions on, Jg. 34,<br />

H. 1, S. 133‐153.<br />

Hanisch, J.; Corbitt, B. (2007): Impediments to requirements engineering during global software development.<br />

In: European Journal of Information Systems, Jg. 16, S. 793–805.<br />

Hayes, J. H.; Dekhtyar, A.; Sundaram, S. K. (2005): Improving after‐the‐fact tracing and mapping: supporting<br />

software quality predictions. In: Software, IEEE, Jg. 22, H. 6, S. 30‐37.<br />

Heinrich, L.J.; Pomberger, G. (2000): Prototypingbasierte Evaluation von Software-Angeboten. In: Evaluation<br />

und Evaluationsforschung in der Wirtschaftsinformatik. Eds.: Heinrich, L.J.; Häntschel, I. Oldenbourg,<br />

München 2000, p. 201-212.<br />

Hevner, A.R.; March, S.T.; Park, J.; Ram, S. (2004): Design Science in Information Systems Research. In:<br />

MIS Quarterly, Vol. 28 (2004) Nr. 1, p. 75-105.<br />

Hickey, A.M.; Dean, D.L. (1998): Prototyping for Requirements Elicitation and Validation: A Participative<br />

Prototype Evaluation Methodology. Paper presented at the Americas Conference on Information<br />

Systems, Baltimore, S. 798-200.<br />

Hood, C.; Mühlbauer, S.; Rupp, C.; Versteegen, G. (2007): iX-Studie Anforderungsmanagement (Vol. 2),<br />

Heise Zeitschriften Verlag, Hannover 2007.<br />

IEEE Std. 610.12-1990, IEEE Standard Glossary of Software Engineering Terminology. New York: The<br />

Institute of Electrical and Electronics Engineers 1990.<br />

Jackson, M. (2004): Seeing more of the world [requirements engineering]. In: Software, IEEE, Jg. 21, H. 6, S.<br />

83‐85.<br />

Juristo, N.; Moreno, A. M.; Sanchez-Segura, M.-I. (2007): Guidelines for Eliciting Usability Functionalities.<br />

In: Software Engineering, IEEE Transactions on, Jg. 33, H. 11, S. 744‐758.<br />

Kandrup, S. (2005): On systems coaching. In: Software, IEEE, Jg. 22, H. 1, S. 52‐54.<br />

Kavakli, E.; Loucopoulos, P. (2006): Experiences with goal‐oriented modeling of organizational change. In:<br />

Systems, Man, and Cybernetics, Part C: Applications and Reviews, IEEE Transactions on, Jg. 36, H. 2,<br />

S. 221‐235.<br />

Keblawi, F.; Sullivan, D. (2006): Applying the common criteria in systems engineering. In: Security & Privacy,<br />

IEEE, Jg. 4, H. 2, S. 50‐55.<br />

Kohl, R. J. (2005): Requirements engineering changes for COTS‐intensive systems. In: Software, IEEE, Jg. 22,<br />

H. 4, S. 63‐64.<br />

Kromrey, H. (2001): Evaluation - ein vielschichtiges Konzept: Begriff und Methodik von Evaluierung und<br />

Evaluationsforschung. In: Sozialwissenschaften und Berufspraxis, Vol. 24 (2001) Nr. 2, p. 105-131.<br />

Lan Cao; Ramesh, B. (2008): Agile Requirements Engineering Practices: An Empirical Study. In: Software,<br />

IEEE, Jg. 25, H. 1, S. 60‐67.<br />

Lancaster, A. (2004): Paper Prototyping: The Fast and Easy Way to Design and Refine User Interfaces. In:<br />

Professional Communication, IEEE Transactions on, Jg. 47, H. 4, S. 335‐336.<br />

Leimeister, J.M.; Krcmar, H. (2006): Systematischer Aufbau und Betrieb Virtueller Communitys im<br />

Gesundheitswesen. In: WIRTSCHAFTSINFORMATIK, Vol. 48 (2006) Nr. 6, p. 407-417.<br />

Li, Q. M.; Lee, F. C.; Wilson, Thomas G. [JR.] (2003): Design verification and testing of power supply system<br />

by using virtual prototype. In: Power Electronics, IEEE Transactions on, Jg. 18, H. 3, S. 733‐739.<br />

Loucopoulos, P. (1995): System Requirements Engineering, McGraw-Hill Publishing Co. 1995.<br />

Macaulay, L.A. (1996): Requirements Engineering, Springer-Verlag GmbH 1996.<br />

Maiden, N. (2007): From the Horse’s Mouth. In: Software, IEEE, Jg. 24, H. 6, S. 21‐23.<br />

Maiden, N.; Gizikis, A.; Robertson, S. (2004): Provoking creativity: imagine what your requirements could be<br />

like. In: Software, IEEE, Jg. 21, H. 5, S. 68‐75.<br />

Maiden, N.; Omo, O.; Seyff, N.; Grunbacher, P.; Mitteregger, K. (2007): Determining Stakeholder Needs in<br />

the Workplace: How Mobile Technologies Can Help. In: Software, IEEE, Jg. 24, H. 2, S. 46‐52.<br />

Malcom, E. (2001): Requirements acquisition for rapid applications development.<br />

March, S.T.; Smith, G.F. (1995): Design and natural science research on information technology. In: Decision<br />

Support Systems, Vol. 15 (1995) Nr. 4, p. 251-266.<br />



Markus, M.L.; Majchrzak, A.; Gasser, L. (2002): A Design Theory for Systems that Support Emergent<br />

Knowledge Processes. In: MIS Quarterly, Vol. 26 (2002) Nr. 3, p. 179-212.<br />

Martin, R. C.; Melnik, G. (2008): Tests and Requirements, Requirements and Tests: A Möbius Strip. In:<br />

Software, IEEE, Jg. 25, H. 1, S. 54‐59.<br />

Mellado, D.; Fernandez‐Medina, E.; Piattini, M. (2007): A Security Requirements Engineering Process in<br />

Practice. In: Latin America Transactions, IEEE (Revista IEEE America Latina), Jg. 5, H. 4, S. 211‐217.<br />

Moon, M.; Yeom, K.; Chae, H. S. (2005): An approach to developing domain requirements as a core asset<br />

based on commonality and variability analysis in a product line. In: Software Engineering, IEEE<br />

Transactions on, Jg. 31, H. 7, S. 551‐569.<br />

Natt och Dag, Joachim; Regnell, B.; Gervasi, V.; Brinkkemper, S. (2005): A linguistic‐engineering approach<br />

to large‐scale requirements management. In: Software, IEEE, Jg. 22, H. 1, S. 32‐39.<br />

Neill, C. J.; Laplante, Phillip A. (2003): Requirements engineering: the state of the practice. In: Software, IEEE,<br />

Jg. 20, H. 6, S. 40–45.<br />

Nielsen, J. (1993): Usability Engineering, Academic Press, Boston 1993.<br />

Niu, N.; Easterbrook, S. (2007): So, You Think You Know Others’ Goals? A Repertory Grid Study. In:<br />

Software, IEEE, Jg. 24, H. 2, S. 53‐61.<br />

Norden, B. (2007): Screenwriting for Requirements Engineers. In: Software, IEEE, Jg. 24, H. 4, S. 26‐27.<br />

Nunamaker, J.F.; Chen, M.; Purdin, T.D.M. (1991): Systems Development in Information Systems Research.<br />

In: Journal of Management Information Systems, Vol. 7 (1991) Nr. 3, p. 89-106.<br />

Nuseibeh, B.A.; Easterbrook, S.M. (2000): Requirements Engineering: A Roadmap. Paper presented at the 22nd<br />

International Conference on Software Engineering, ICSE'00.<br />

Owen, C.L. (1997): Understanding Design Research. Toward an Achievement of Balance. In: Journal of the<br />

Japanese Society for the Science of Design, Vol. 5 (1997) Nr. 2, p. 36-45.<br />

Pohl, K. (2007): Requirements Engineering. Grundlagen, Prinzipien, Techniken (1 Aufl.), Dpunkt Verlag 2007.<br />

Pomberger, G.; Pree, W.; Stritzinger, A. (1992): Methoden und Werkzeuge für das Prototyping und ihre<br />

Integration. In: Informatik Forschung und Entwicklung, Vol. 7 (1992), S. 49-61.<br />

Ramesh, B.; Cao, L.; Baskerville, R. (2007): Agile requirements engineering practices and challenges: an<br />

empirical study.<br />

Ramesh, B.; Cao, L.; Baskerville, R. (2008): Agile requirements engineering practices and challenges: an<br />

empirical study In: Information Systems Journal, Vol. 25 (2008) Nr. 1, S. 60-67.<br />

Regnell, B.; Svensson, R. B.; Olsson, T. (2008): Supporting Roadmapping of Quality Requirements. In:<br />

Software, IEEE, Jg. 25, H. 2, S. 42‐47.<br />

Riddle, W.E. (1983): Advancing the State of the Art in Software System Prototyping. Paper presented at the<br />

Approaches to Prototyping, Namur, S. 19-26.<br />

Robertson, S. (2005): Learning from other disciplines [requirements engineering]. In: Software, IEEE, Jg. 22, H.<br />

3, S. 54‐56.<br />

Rui Wang; Shiyuan Yang (2004): The design of a rapid prototype platform for ARM based embedded system.<br />

In: Consumer Electronics, IEEE Transactions on, Jg. 50, H. 2, S. 746‐751.<br />

Sangsoo Kim; In, H. P. e.; Jongmoon Baik; Kazman, R.; Kwangsin Han (2008): VIRE: Sailing a Blue Ocean<br />

with Value‐Innovative Requirements. In: Software, IEEE, Jg. 25, H. 1, S. 80‐87.<br />

Sawyer, P.; Rayson, P.; Cosh, K. (2005): Shallow knowledge as an aid to deep understanding in early phase<br />

requirements engineering. In: Software Engineering, IEEE Transactions on, Jg. 31, H. 11, S. 969‐981.<br />

Sharp, H.; Rogers, Y.; Preece, J. (2007): Interaction Design, 2nd ed., Wiley, Chichester 2007.<br />

Schienmann, B. (2001): Kontinuierliches Anforderungsmanagement . Prozesse - Techniken - Werkzeuge (1<br />

Aufl.), Addison-Wesley 2001.<br />

Schrage, M. (2004): Never Go to a Client Meeting without a Prototype, in: IEEE Software, Vol. 21 (2004), No.<br />

2, pp. 42-45.<br />

Schwabe, G.; Krcmar, H. (2000): Piloting a Socio-technical Innovation. Presented at: 8th European Conference<br />

on Information Systems ECIS 2000, Vienna, p. 132-139.<br />

Serich, S. (2005): Prototype stopping rules in software development projects. In: Engineering Management,<br />

IEEE Transactions on, Jg. 52, H. 4, S. 478‐485.<br />

Seyff, N.; Grünbacher, P.; Neil, M. (2006): Take your mobile device out from behind the requirements desk. In:<br />

Software, IEEE, Jg. 23, H. 4, S. 16‐18.<br />

Simon, H.A. (1996): Sciences of the Artificial. (3 Ed.), MIT Press, Boston 1996.<br />

Somerville, I.; Kotonya, G. (1998): Requirements Engineering: Processes and Techniques (1 Aufl.), Wiley &<br />

Sons 1998.<br />



Sommerville, I.; Ransom, J. (2005): An Empirical Study of Industrial Requirements Engineering Process<br />

Assessment and Improvement. In: ACM Transactions on Software Engineering and Methodology, Jg. 14, H. 1, S. 85–<br />

117.<br />

Spinellis, D. (2007): Silver Bullets and Other Mysteries. In: Software, IEEE, Jg. 24, H. 3, S. 22‐23.<br />

Stockmann, R. (2007a): Einführung in die Evaluation. In: Handbuch zur Evaluation: Eine praktische<br />

Handlungsanleitung. Eds.: Stockmann, R. Waxmann, Münster 2007a, p. 24-70.<br />

Stockmann, R. (Eds.) (2007b): Handbuch zur Evaluation: Eine praktische Handlungsanleitung. Waxmann,<br />

Münster 2007b.<br />

Szyperski, N. (1971): Zur wissensprogrammatischen und forschungsstrategischen Orientierung der<br />

Betriebswirtschaft. In: Zeitschrift für betriebswirtschaftliche Forschung (zfbf), Vol. 23 (1971), p. 261-<br />

282.<br />

Takeda, H.; Tomiyama, T.; Yoshikawa, H.; Veerkamp, P. (1990): Modeling Design Processes. Centre for<br />

Mathematics and Computer Science.<br />

Te’eni, D.; J. Carey; P. Zhang (2006): Human Computer Interaction: Developing Effective Organizational<br />

Information Systems, John Wiley & Sons, Hoboken, 2006.<br />

Thomas, D.; Hunt, A. (2004): Nurturing requirements. In: Software, IEEE, Jg. 21, H. 2, S. 13‐15.<br />

Vaishnavi, V.; Kuechler, W. (2004): Design Science in Information Systems.<br />

http://www.isworld.org/Researchdesign/drisISworld.htm, accessed on 08.09.2007.<br />

Versteegen, G. (2007): Neues im Anforderungsmanagement. In: OBJEKTspektrum/Requirements Engineering -<br />

online Ausgabe, (2007).<br />

Vickers, A. (2007): Satisfying Business Problems. In: Software, IEEE, Jg. 24, H. 3, S. 18‐20.<br />

Weber, M.; Weisbrod, J. (2003): Requirements engineering in automotive development: experiences and<br />

challenges. In: Software, IEEE, Jg. 20, H. 1, S. 16‐24.<br />

Weiss, C.H. (1974): Evaluierungsforschung: Methoden zur Einschätzung von sozialen Reformprogrammen,<br />

Westdeutscher Verlag, Opladen 1974.<br />

Witte, E. (1997): Feldexperimente als Innovationstest - Die Pilotprojekte zu neuen Medien. In: Zeitschrift für<br />

betriebswirtschaftliche Forschung (zfbf), Vol. 49 (1997) Nr. 5, p. 419-436.<br />

Woolridge, R. W.; McManus, D. J.; Hale, J. E. (2007): Stakeholder Risk Assessment: An Outcome‐Based<br />

Approach. In: Software, IEEE, Jg. 24, H. 2, S. 36‐45.<br />



Performance of Information Systems

Bögelsack, A.; Jehle, H.; Gradl, S.; Kienegger, H.

Literature Review

Technische Universität München
Lehrstuhl für Wirtschaftsinformatik (I 17)
Boltzmannstr. 3 – 85748 Garching bei München
{boegelsack, jehle, gradl, kienegger}@in.tum.de

1 Introduction – “If you can’t measure it – you can’t manage it” (Peter F. Drucker)

To make use of the term “performance”, we need a rating system for it (Lilja 2000): performance must be expressed in metrics so that it can be classified, analyzed, compared, optimized, and used for forecasts. The following sections introduce some terms used for performance metrics and for collecting performance data (Perlis et al. 1981); subsequently, performance evaluation is applied to derive key performance indicators for complex IT systems (Lucas 1971).

First, we need to define the term performance. Performance describes a system’s capability to execute given tasks in a given time (John/Eeckhout 2006): the more tasks of a fixed type a system can finish in a fixed period of time, the higher its performance (Schmietendorf/Dumke 2003). In the early days of computer systems, a system’s performance was stated as the theoretical number of calculations it could perform in a period of time. This method turned out to be of little significance, because the system’s peripheral architecture may prevent tasks from executing at full speed (Buhr/Fortier 1995). The process of collecting performance data therefore soon shifted to measurement: special programs execute tasks and record the time needed. If this method is used to achieve the highest possible performance value, the resulting value is called a benchmark (Hoetzel et al. 1998).

The benchmark is the first example of a metric introduced here. The highest possible throughput of tasks per unit of time is the most descriptive value for an IT system’s capabilities. Hardware vendors use it to compare their products with those of competitors; software developers and vendors use benchmarks to specify the requirements of their products; and system landscape designers use benchmark values to size IT infrastructures for future needs. Benchmarks thus serve many different parties, but measuring performance data is a non-trivial task in most cases (Gmach et al. 2008). This is why only a few of these parties really measure performance and collect their own benchmarks; most rely on data collected by others.

In most cases, benchmarks are measured by hardware vendors. Their main goal is to obtain the highest possible value for any benchmark in order to come out ahead in any comparison with the competition. To achieve this, they tune the measured systems into configurations that are unusable for everyday operation, which makes the collected data accurate but meaningless for operative use (Saavedra/Smith 1996).
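As a minimal illustration of this notion of throughput, a benchmark run can be sketched as timing a fixed task repeatedly; the task and run count below are hypothetical, not taken from any of the cited studies:

```python
import time

def run_benchmark(task, n_runs):
    """Execute a fixed task n_runs times and return throughput (tasks/second).

    On a specially tuned configuration this value is a vendor-style benchmark;
    measured on a production-like configuration it is an individual baseline.
    """
    start = time.perf_counter()
    for _ in range(n_runs):
        task()
    elapsed = time.perf_counter() - start
    return n_runs / elapsed

# Hypothetical reference task: sorting a small reversed list.
throughput = run_benchmark(lambda: sorted(range(1000, 0, -1)), 200)
print(f"{throughput:.0f} tasks/s")
```

The absolute number is meaningful only relative to runs of the same task on the same or a comparable configuration, which is exactly the limitation discussed above.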

One way to obtain realistic data on a system’s performance is to execute representative tasks from the system’s everyday usage, measuring the time they take on the one hand and observing the system’s status during execution on the other. Representative tasks must be determined by hand, often by taking a detailed look at the log files of a system; it is also possible to take a statistical excerpt of so-called monitoring data (see below). Once a sequence of frequently used tasks has been found, this sequence can be defined as the individual reference load for the examined landscape. The execution time of this reference sequence can then be optimized for maximum speed or for optimal resource usage. Optimizing for maximum speed yields an individual benchmark; it may be incomparable to other landscapes, but, if the reference sequence is well chosen, it is a very good value for comparing one’s own systems across lifecycles and change processes (Lacey/Macfarlane 2002).

Optimizing the resource usage of the examined system requires observing the system’s status, which is collected and visualized in so-called monitoring systems (Buhr/Fortier 1995). Monitoring systems do not estimate the system’s overall performance capabilities, but they do collect performance values by recording relative system load data. For this purpose, the target system runs a special program, which must be designed to be efficient and lightweight so as not to distort the collected data, and which reports the system’s status to a central database. The data collector on the target system reports current resource usage such as CPU or memory utilization, load on the input/output system such as storage and network, or even internal statistics such as cache hits and misses of database systems. The central database of the monitoring system generates an aggregated view to display the system’s status, but also offers detailed views of the system’s components (Wilhelm 2003).
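A minimal sketch of such a collector is shown below; the in-memory store stands in for the central monitoring database, and the sampler callables stand in for real CPU/memory/I/O probes, all of which are assumptions for illustration:

```python
import time
from collections import defaultdict

class MonitoringCollector:
    """Lightweight data collector: samples resource metrics on demand and
    reports them to a central store (here an in-memory dict standing in
    for the monitoring system's central database)."""

    def __init__(self, samplers):
        self.samplers = samplers          # metric name -> zero-argument callable
        self.store = defaultdict(list)    # central database stand-in

    def sample_once(self):
        ts = time.time()
        for name, read in self.samplers.items():
            self.store[name].append((ts, read()))

    def aggregated_view(self, name):
        """Aggregated view over all samples of one metric."""
        values = [v for _, v in self.store[name]]
        return {"min": min(values), "max": max(values),
                "avg": sum(values) / len(values)}

# Hypothetical sampler; a real agent would read CPU, memory, or I/O counters.
collector = MonitoringCollector({"cpu_util": lambda: 0.42})
collector.sample_once()
print(collector.aggregated_view("cpu_util"))
```

The design mirrors the text: a slim agent on the target system and aggregation at the center, so that the act of measuring disturbs the measured load as little as possible.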

Observing the system’s status while running different tasks can reveal bottlenecks in the utilized resources; conversely, it shows the reserves remaining while handling the everyday workload. Comparing the resource usage for a reference workload sequence, while not running for maximum throughput, is called baselining (Lucas 1971). It helps to verify the expected system behavior after changes to the system’s configuration.

As mentioned before, benchmarks are often used by hardware vendors as metrics to promote their products, but these benchmarks are often useless for everyday operation. In contrast, individual benchmarks and baseline values are a good way to provide data for managing IT landscapes. Measuring the resource utilization needed to provide the required services is one step towards assigning cost drivers within the IT infrastructure. At the same time, the utilization of resources indicates the efficiency of the IT infrastructure and can thus be used as a key performance indicator (Berler et al. 2005).

Detailed measurement of a system’s performance is therefore the first step towards revealing the data relevant for managing the infrastructure.



2 Performance in the field of simulation<br />

Starting from the measurement results, the next step is system analysis; the results lead to the field of performance estimation for systems that are not yet installed. In the context of service-oriented architectures, these results may be used to estimate the performance of business processes consisting of a number of partly composite services. To estimate the performance of such composite applications, it is important to measure all the combined services. Following the General Systems Theory of Ludwig von Bertalanffy, published in 1968, however, it is not enough to describe all the parts of a system, as “the first step is the study of the system as a whole” (Coarfa et al. 2006). “We are forced to deal with complexities, with ‘wholes’ or ‘systems’” (Coarfa et al. 2006), since “the constitutive characteristics are not explainable from the characteristics of isolated parts”.

Coarfa et al. (2006) present another method for performance analysis. The paper points out that current research deals not only with simulating complex computer systems but also with analyzing them; analysis is the first step towards simulating computer systems or information systems.

Another aspect of current performance research is shown by Denaro et al. (2004). Here, the measurement of computer systems is not used to create a model. The authors state that “in the long term and as far as the early evaluation of middlewares is concerned, we believe that empirical testing may outperform performance estimation models, being the former more precise and easier to use” (Denaro et al. 2004). This statement reflects the current progress in performance research on complex systems: different research groups pursue a similar aim, namely managing information systems.

To describe systems as a whole on a different level, Petri nets are widely used in research, as they have “equal shares of declarative (i.e. place) and functional (i.e. transitions) subrepresentations” (Fishwick 1992). But as the complexity and size of the systems grow constantly, these descriptions and models become very hard to handle. Simulations based on such large and complex descriptions are often very hard to compute; they “can […] require fantastically large computing resources” (Nicol 2008). Once implemented, simulations can “simulate the overall system, but this can be very time consuming” (Ledeczi et al. 2003). Current research is concerned with making these simulation algorithms more efficient, so that simulation does not require as many resources. Another way to make simulation more practicable is to simplify the models: current research tries to convert very complex models into smaller ones without losing any information or modeled detail (Nicola and Nitsche 2008).

Another point of interest in current research is the expressive power of the modeling languages. It is often impossible to describe a system as a whole. To obtain a detailed picture, researchers frequently simulate the system from different perspectives in order to arrive at a full representation of the object. This leads to the point that “developing a single embedded application involves a multitude of different development tools including several different simulators. Functional simulators are used to verify that the selected algorithms do indeed result in the desired system behavior. High-level performance estimators can be used to obtain early system wide performance numbers. Cycle-accurate simulators are used to obtain accurate performance estimates for individual system components” (Ledeczi et al. 2003).

Matloff (2005) presents another popular approach to describing information systems: estimation from indirect or secondary data. The approach uses statistical assumptions such as Poisson processes to build estimators that describe one detail of a very complex system, e.g. file-access and modification rates on the internet. The accuracy of these estimations is a further point of interest in current performance research, since such estimators can work well even on the non-Poisson web data on which the tests are based.

3 Performance Aspects in the Context of Service-oriented Architectures (SOA)

Performance can mean different things in different contexts. In general, it relates to response time (how long it takes to process a request), throughput (how many requests can be processed per unit of time), or timeliness (the ability to meet deadlines, i.e., to process a request in a deterministic and acceptable amount of time) (Drummond 1969).
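These three views of performance can be made concrete with a small sketch; the request log below (start/end timestamps in seconds) and the deadline are hypothetical:

```python
def performance_metrics(requests, deadline):
    """Derive the three notions of performance from (start, end) timestamps:
      mean_response_time - average processing time per request
      throughput         - completed requests per unit of wall-clock time
      timeliness         - fraction of requests finished within `deadline`."""
    durations = [end - start for start, end in requests]
    span = max(end for _, end in requests) - min(start for start, _ in requests)
    return {
        "mean_response_time": sum(durations) / len(durations),
        "throughput": len(requests) / span,
        "timeliness": sum(d <= deadline for d in durations) / len(durations),
    }

# Hypothetical request log of four service calls.
log = [(0.0, 0.4), (0.1, 0.9), (0.5, 2.1), (1.0, 1.3)]
print(performance_metrics(log, deadline=1.0))
```

The same log yields different verdicts depending on which view matters: throughput may be acceptable while timeliness is not, which is the distinction the SOA discussion below relies on.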

Comparing earlier IT architectures with the service-oriented architecture discussed here reveals new performance aspects; in particular, the introduction of new components (e.g. the SOA stack) brings new challenges. In this chapter, we focus on performance aspects in the context of SOA. Figure 1 shows some of the performance aspects that are discussed below.

[Figure 1: Performance aspects — two applications exchanging data via the SOA stack; aspects shown: standards, network, process orchestration, service providing]

In most cases, performance is negatively affected in SOAs. The architecture should therefore be carefully designed and evaluated prior to implementation to avoid performance pitfalls. Integrating real-time information from different systems brings real advantages, but they are offset by the extra time it takes to bring the information together (Gmach et al. 2008; van Oosterhout et al. 2006).

SOA involves distributed computing. Service and service-user components are normally located in different containers, most often on different machines (Gmach et al. 2008; Siedersleben 2007), and the need to communicate over the network increases the response time. Typical networks used for SOA, such as the internet, do not guarantee deterministic latency. Therefore, SOA is not considered a feasible solution for real-time systems where timeliness is a strict requirement (Colajanni and Yu 2002; Jin and Nahrstedt 2008). SOA also presents challenges for near real-time systems, where latency is not a safety-critical requirement but a business one, i.e., meeting business goals (Yu et al. 2007). For a heavily used service, many queued requests may already be outstanding, and they are usually serviced in a FIFO manner. Such a situation can have a significant impact on latency, though the latency can still be predicted stochastically. However, if more queue space has to be created dynamically, latency is impacted further.
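Under strongly simplifying assumptions (Poisson arrivals, exponential service times, a single FIFO server), this stochastic prediction can be illustrated with the classic M/M/1 result; the request rates below are hypothetical:

```python
def mm1_expected_latency(arrival_rate, service_rate):
    """Expected time in system (queueing + service) for an M/M/1 queue:
    W = 1 / (mu - lambda). Valid only for a stable queue (lambda < mu)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    return 1.0 / (service_rate - arrival_rate)

# Hypothetical heavily used service: 90 req/s arriving, capacity 100 req/s.
print(mm1_expected_latency(90.0, 100.0))   # mean latency near saturation
# The same service at half the load:
print(mm1_expected_latency(45.0, 100.0))
```

The comparison shows why a heavily used FIFO service hurts latency disproportionately: as the arrival rate approaches the service capacity, expected latency grows without bound, even though it remains predictable in the stochastic sense described above.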

The ability to make services on different platforms interoperate seamlessly has a<br />

performance cost (Jin and Nahrstedt 2008). Intermediaries are needed to perform data<br />

marshalling and handle all communication between a service user and a service provider.<br />

Depending on the SOA technology or framework being used, stubs, skeletons, SOAP<br />

(Simple Object Access Protocol) engines, proxies, and other kinds of elements are in place.<br />

All such intermediaries negatively impact performance (van Oosterhout et al. 2006).<br />

Using standards-based web services removes much of the complexity and lock-in of earlier integration platforms, but existing infrastructure is ill-equipped to handle the extra overhead. The performance (response time) of web services is typically worse than that of the bare underlying functionality. This is due to the overheads of XML (e.g. delivery, parsing, validation, serialization), the implementation of composite services (e.g. the use of other services, including third-party services and legacy applications), service orchestration (infrastructure and protocols), service invocation (e.g. security, transports), resources (physical or software resources, e.g. CPUs, threads), and resource models (e.g. virtualization). Optimally allocating resources to services is also difficult, as resources may not be correctly distributed, and individual services in a workflow for which there is high demand may become bottlenecks due to resource constraints (Zeng et al. 2004).<br />

Using orchestration and services can affect the performance of the total system. Today, BPEL (Business Process Execution Language) is mainly used for defining the orchestration, WSDL (Web Services Description Language) for defining the service interface, and SOAP for defining the messages. All these standards are XML-based (Papazoglou and van den Heuvel 2007). This makes them human-readable, but also creates a lot of overhead for system-to-system communication. Imagine the difference in performance between a Java function call (which is compiled into byte code) and a service invocation sending a SOAP message over HTTP. In conclusion: the higher the granularity, the more SOAP calls are needed, and more SOAP calls mean lower performance (Winkler 2007; Yu et al. 2007).<br />
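The per-call marshalling cost behind this comparison can be sketched without any network at all: the same computation is run once as a plain in-process call and once wrapped in an XML envelope. The envelope below is deliberately simplified and does not follow the real SOAP schema:

```python
import xml.etree.ElementTree as ET

def add(a, b):
    # The "bare underlying functionality": a plain in-process call.
    return a + b

def add_via_envelope(a, b):
    # Mimic the marshalling done on every service call: build an XML
    # envelope, serialize it, parse it on the "receiver" side, and only
    # then run the actual computation.
    envelope = ET.Element("Envelope")
    call = ET.SubElement(ET.SubElement(envelope, "Body"), "add")
    ET.SubElement(call, "a").text = str(a)
    ET.SubElement(call, "b").text = str(b)
    wire = ET.tostring(envelope)    # serialization overhead
    received = ET.fromstring(wire)  # parsing overhead on the receiver
    args = {e.tag: int(e.text) for e in received.find("Body/add")}
    return add(args["a"], args["b"])

print(add_via_envelope(2, 3))  # 5 -- same result, far more work per call
```

Timing the two variants in a loop makes the overhead ratio visible even for this toy payload; real SOAP stacks add network transfer, validation and security processing on top.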


Figure 2: Overview of issues in service identification<br />

Figure 2 summarizes the issues in service identification. The figure is not meant to suggest precise relations between distances and curves, nor does it claim a scientific foundation; it merely attempts to clarify how the issues relate. The x-axis shows the granularity: from left to right the granularity increases, meaning that the services become smaller. The y-axis shows the four issues. Low and high complexity, flexibility and performance are straightforward; high reuse means a high percentage of services that are used in more than one orchestration.<br />

4 Performance in the context of ERP systems<br />

Enterprise Resource Planning (ERP) systems are an essential part of the infrastructure that runs and supports a company's business processes. Hence, it is important that the performance of such ERP systems is adequate and does not impair the users' ability to work, e.g. by forcing them to wait for the system. Many works have examined such influences, but they all focused on user performance in the ERP system, i.e. how the user performs in the ERP system. Besides this understanding of performance there is another one that has not been considered extensively in the literature so far: the performance of the ERP system itself.<br />

Before the performance of an ERP system can be discussed, we should consider its architecture. An ERP system can be described as a huge software system consisting of several different software components. As the idea of service-oriented architectures becomes more and more of a reality, the complexity and intricacy of ERP systems grow significantly. On the one hand this is because software components are distributed across several software landscapes, and on the other hand because of the heterogeneity of the components. This complicates performance measurements of ERP systems.<br />

Besides this we should consider another trend that has consequences for ERP systems: virtualization. Virtualization was first introduced in the IBM mainframe world when the early VM/370 systems were developed (Creasy 1981). The theoretical foundation of virtualization was developed by Popek and Goldberg (Goldberg 1973; Popek and Goldberg 1974). Their theory describes a mapping component that is responsible for mapping virtual resources to real resources. In today's virtualization solutions this mapping is done by an additional software component called the hypervisor. After interest had dried up in the mainframe world during the 1980s, a software company called VMware restarted the virtualization trend in the 1990s, this time applying virtualization not to the mainframe but to the personal computer world (Walters 1999). Today many more virtualization solutions, some close to VMware's approach and some conceptually far from it, are available on the market. The importance of such virtualization techniques is growing enormously (InfoTech 2007). With the spread of virtualization, different types of virtualization were developed to satisfy special requirements (full virtualization, paravirtualization, as well as hybrid virtualization solutions). Coupling this virtualization trend with the complexity and intricacy of ERP systems raises the question: how does virtualization influence the performance of ERP systems?<br />

The hypervisor is often the starting point for extensive performance measurement tests: Apparao et al. 2006; Barham et al. 2003; Bhargava et al. 2008; Cherkasova et al. 2005; Huang et al. 2006; Young 2006; Matthews et al. 2007; Menon et al. 2005; Ongaro et al. 2008; Soltesz et al. 2007; Whitaker et al. 2002; Youseff et al. 2006. To date, performance measurement projects have focused on the hypervisor and tried to determine its influence on the performance of software or hardware running on top of it by testing the hypervisor indirectly; it is currently not possible to test the hypervisor's mapping functionality directly. In fact, the papers mentioned above focus on very specific performance tests:<br />

- Whitaker et al. (2002) examined the performance of a self-developed paravirtualization solution by applying throughput benchmarks to their system.<br />

- Apparao et al. (2006) characterized the network processing overhead in a paravirtualization solution by using a TCP/IP performance benchmark.<br />

- Barham et al. (2003) used SPEC INT2000, SPEC WEB99 and other benchmarks to evaluate the performance of their paravirtualization solution.<br />

- Bhargava et al. (2008) examined the memory management of a virtualization solution by comparing different memory benchmark results.<br />

- Cherkasova et al. (2005) focused on the CPU overhead for I/O processing in a paravirtualization solution, using an I/O benchmark.<br />

- Huang et al. (2008) analyzed the performance overheads of a paravirtualization solution in a high-performance computing environment.<br />

To date, however, no work has focused on more complex software systems such as ERP systems.<br />

A commonly accepted way to measure the performance of an ERP system is to apply benchmarks to it. These benchmarks measure the capability of the ERP system to handle massive data loads and a growing set of users. One example of such a benchmark is the SAP SD benchmark. Hardware as well as software vendors (such as operating system vendors) use this benchmark to demonstrate the performance of their products. Some commercial test results involving virtualization are available: for example, we discovered 14 certified SAP SD benchmark results in which the virtualization<br />



solution from VMware was used. Unfortunately, however, no direct comparisons between a virtualized and a non-virtualized SAP system are available.<br />

The main question therefore is: what happens to the performance of such a huge software system when a new software layer (the hypervisor) is added to its architecture, compared to the same ERP system without this additional layer? Does it influence the performance of such systems positively, or does it have a negative impact? From the papers above we know that virtualization always has an impact, whether positive or negative. Following this question, our expectations are as follows:<br />

- The performance of ERP systems is partly positively affected by virtualization techniques: simple experiments show that additional buffers can accelerate the performance of ERP systems.<br />

- Overall performance (especially in extensive performance tests) will be decreased by virtualization.<br />

- We expect a scalability effect in which the hypervisor is not able to handle increasing load fairly.<br />

- There may be other impacts on the ERP system that we cannot foresee yet.<br />
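The missing direct comparison could be reduced to a simple relative-overhead calculation over paired benchmark runs. The figures below are invented for illustration and are not real SAP SD results:

```python
def overhead_percent(native_score, virtualized_score):
    """Relative slowdown of a virtualized run against a native baseline.

    Scores are throughput figures (higher is better); a positive result
    means the virtualized system performed worse than the native one.
    """
    return (native_score - virtualized_score) / native_score * 100.0

# Invented numbers, not real SAP SD results: a native run scoring
# 10000 against a virtualized run scoring 9200 shows 8% overhead.
print(overhead_percent(10000, 9200))  # 8.0
```

Repeating such paired measurements across increasing user counts would also expose the expected scalability effect, i.e. whether the overhead stays constant or grows with load.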



Bibliography<br />

Apparao, P.; Makineni, S.; Newell, D. (2006): Characterization of network processing overheads in Xen. In: Proceedings of the Second International Workshop on Virtualization Technology in Distributed Computing.<br />

Barham, P.; Dragovic, B.; Fraser, K.; Hand, S.; Harris, T.; Ho, A.; Neugebauer, R.; Pratt, I.; Warfield, A. (2003): Xen and the Art of Virtualization. In: SOSP '03 - Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, 164-177.<br />

Berler, A.; Pavlopoulos, S.; Koutsouris, D. (2005): Using Key Performance Indicators as Knowledge-Management Tools at a Regional Health-Care Authority Level. In: IEEE Transactions on Information Technology (2005).<br />

Buhr, P. A.; Fortier, M. (1995): Monitor Classification. In: ACM Computing Surveys, Vol. 27, No. 1, p. 45.<br />

Cherkasova, L.; Gardner, R. (2005): Measuring CPU Overhead for I/O Processing in the Xen Virtual Machine Monitor. In: Proceedings of the USENIX Annual Technical Conference, 387-390.<br />

Cherkasova, L.; Gupta, D.; Amin, V. (2007): Comparison of the Three CPU Schedulers in Xen. In<br />

SIGMETRICS Performance Evaluation Review 25 (2), 42-51.<br />

Coarfa, C.; Druschel, P.; Wallach, D. (2006): Performance Analysis of TLS Web Servers. In: ACM<br />

Transactions on Computer Systems, Vol. 24, No. 1, 39-69.<br />

Colajanni, M.; Yu, P. S. (2002): The state of the art in locally distributed Web-server systems. In:<br />

ACM Computing Surveys, Vol. 34, No. 2, 263–311.<br />

Denaro, G.; Polini, A.; Emmerich W. (2004): Early Performance testing in distributed software<br />

applications, In: WOSP'04: Proceedings of the 4th international workshop on Software and<br />

Performance<br />

Drummond, M. E. Jr. (1969): A perspective on system performance evaluation. In: IBM Systems<br />

Journal, Vol. 4, 252-263<br />

Fishwick, P. (1992): An integrated approach to system modeling using synthesis of artificial<br />

Intelligence, Software Engineering and Simulation Methodologies. In: ACM Transactions on<br />

Modeling and Computer Simulation, Vol. 2, No. 4, 307-330<br />

Gmach, D.; Krompass, S.; Scholz, A.; Wimmer, M.; Kemper, A. (2008): Adaptive Quality of Service<br />

Management for Enterprise Services. In: ACM Transactions on the Web, Vol. 2, No. 1, Article<br />

8.<br />

Hoetzel, A.; Benhaim, A.; Griffiths, N.; Holliday, C. (1998): Benchmarking in Focus, IBM 1998.<br />

InfoTech (2007): Virtualisation to drive IT infrastructure: Gartner.<br />

http://infotech.indiatimes.com/Enterprise/Infrastructure/SW__HW__Peripherals/Virtualisation<br />

_to_drive_IT_infrastructure_Gartner_/articleshow/2035141.cms, accessed on 12/2/2007<br />

John, L. K.; Eeckhout, L. (2006): Performance Evaluation and Benchmarking, CRC-Press, Boca<br />

Raton 2006.<br />

Jin, J.; Nahrstedt, K. (2008): QoS-Aware Service Management for Component-Based Distributed<br />

Applications. In: ACM Transactions on Internet Technology, Vol. 8, No. 3, Article 14.<br />

Kunkel, S. R.; Eickemeyer, R. J.; Lipasi, M. H.; Mullins, T. J.; O’Krafka, B.;Rosenberg, H.;<br />

VanderWiel, S. P.; Vitale, P. L.; Whitley, L. D. (2000): A performance methodology for<br />

commercial servers. In: IBM Journal of Research and Development, Vol. 44, No. 6, 851-872.<br />

Lacey, S.; Macfarlane, I. (2002): Service support: ITIL managing IT services (The Office of Government Commerce). 6th ed., TSO (The Stationery Office), London.<br />

Ledeczi, A.; Davis, J.; Neema, S.; Agrawal, A. (2003): Modeling Methodology for Integrated<br />

Simulation of Embedded Systems. In: ACM Transactions on Modeling and Computer<br />

Simulation, Vol. 13, No. 1, 82-103<br />

Lilja, D. J. (2000): Measuring Computer Performance - A practitioner's guide, Cambridge University<br />

Press 2000.<br />

Lucas, H. C. J. (1971): Performance Evaluation and Monitoring. In: ACM Computing Surveys, Vol. 3, No. 3, p. 13.<br />



Martorell, X.; Smeds, N.; Brunheroto, J. R.; Almási, G.; Gunnels, J. A.; DeRose, L.; Labarta, J.;<br />

Escalé, F.; Giménez, J.; Servat, H.; Moreira, J. E. (2005): Blue Gene/L performance tools. In:<br />

IBM Journal of Research and Development, Vol. 49, No. 2/3, 407-424.<br />

Matloff, N. (2005): Estimation of Internet file-access/modification rates from indirect data. In: ACM<br />

Transactions on Modeling and Computer Simulation, Vol. 15, No. 3, 233-253<br />

Morris, R. J. T.; Truskowski, B. J. (2003): The evolution of storage systems. In IBM Systems Journal,<br />

Vol. 42 No. 2, 205-217.<br />

Nicol, D. (2008): Efficient Simulation of Internet Worms. In: ACM Transactions on Modeling and<br />

Computer Simulation, Vol. 18, No. 2, Article 5<br />

Nicola, T.; Ultes-Nitsche, U. (2008): Weakly Continuation Closed Homomorphisms on Automata. In:<br />

Ultes-Nitsche, U.; Moldt, D.; Augusto, J. C. (Eds.): Proceedings of the 6th International<br />

Workshop of Modelling, Simulation, Verification and Validation of Enterprise Information<br />

Systems – MSVVEIS, Barcelona, 39-48.<br />

Ongaro, D.; Cox, A. L.; Rixner, S. (2008): Scheduling I/O in virtual machine monitors. In: Proceedings of the Fourth ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments.<br />

Papazoglou, M. P.; van den Heuvel W.-J. (2007): Service oriented architectures: approaches,<br />

technologies and research issues. In: The VLDB Journal — The International Journal on Very<br />

Large Data Bases; Volume 16, Issue 3, 389 – 415.<br />

Perlis, A.; Sayward, F.; Shaw, M. (1981): Software Metrics: An Analysis and Evaluation,<br />

Massachusetts Institute of Technology, Cambridge, Massachusetts 1981.<br />

Saavedra, R. H.; Smith, A. J. (1996): Analysis of Benchmark Characteristics and Benchmark<br />

Performance Prediction. In: ACM Transactions on Computer Systems, Vol. 14, No. 4, 344-384.<br />

Schmietendorf, A.; Dumke, R.R. (2003): 4. Workshop Performance Engineering in der<br />

Softwareentwicklung (PE 2003). Paper presented at the PE 2003, Magdeburg.<br />

Siedersleben, J. (2007): SOA revisited: Komponentenorientierung bei Systemlandschaften. In:<br />

Wirtschaftsinformatik, Jahrgang 49, Sonderheft, 110-117.<br />

Tetzlaff, W. H.; Buco, W. M. (1984): VM/370, Attached Processor, and multiprocessor performance<br />

study. In IBM Systems Journal, Vol. 23 No. 4, 375-385.<br />

Van Oosterhout, M.; Waarts, E.; van Hillegersberg, J. (2006): Change factors requiring agility and<br />

implications for IT. In: European Journal of Information Systems (2006) 15, 132–145.<br />

Weyuker, E. J.; Avritzer, A. (2002): A metric for predicting the performance of an application under a<br />

growing workload. In IBM Systems Journal, Vol. 41 No. 1, 45-54.<br />

Whitaker, A.; Shaw, M.; Gribble, S. D. (2002): Scale and performance in the Denali isolation kernel. In: Proceedings of the 5th Symposium on Operating Systems Design and Implementation.<br />

Wilhelm, K. (2003): Messung und Modellierung von SAP R/3- und Storage-Systemen für die Kapazitätsplanung. 2003.<br />

Winkler, V. (2007): Identifikation und Gestaltung von Services - Vorgehen und beispielhafte<br />

Anwendung im Finanzdienstleistungsbereich. In: Wirtschaftsinformatik, Jahrgang 49, 4, 257–<br />

266.<br />

Wisniewski, R. W.; Rosenburg, B. (2003): Efficient, Unified, and Scalable Performance Monitoring<br />

for Multiprocessor Operating Systems. In Proceedings of the ACM/IEEE SC2003.<br />

Yu, T.; Zhang, Y.; Lin, K.-J. (2007): Efficient Algorithms for Web Services Selection with End-to-<br />

End QoS Constraints. In: ACM Transactions on the Web, Vol. 1, No. 1, Article 6.<br />

Zeng, L.; Benatallah, B.; Ngu, A.; Dumas, M.; Kalagnanam, J.; Chang, H. (2004): Quality-aware<br />

middleware for Web service composition. In: IEEE Transaction Software Engineering. 30, 5,<br />

311–327.<br />



Workflow Management in Healthcare<br />

Literature Review<br />

Mauro, C.; Mayer, M.; Rathmayer, M.; Sunyaev, A.<br />

Technische Universität München<br />

Lehrstuhl für Wirtschaftsinformatik (I 17)<br />

Boltzmannstr. 3 – 85748 Garching<br />

mauro, manuel.mayer, sunyaev@in.tum.de<br />

m.rathmayer@meierhofer.de<br />

Introduction<br />

In the area of healthcare, workflow management systems (WfMS) are utilized, as in other industrial areas, to automate business processes, e.g. the flow of patient records, patient billing, resource utilization and other processes that need information from the overall treatment process (Murray 2003). Clinical guidelines, clinical pathways and evidence-based medicine are terms that are closely connected to workflows in the area of healthcare, i.e. they describe the non-IT aspects of workflow management.<br />

From the organizational point of view, workflows in healthcare can be found in three areas: intersectoral workflows between practitioners and hospitals; interdisciplinary workflows between departments within a hospital; and finally workflows within departments and special units, for example the operating room or the radiological department, which depend more or less directly on the role of the performer.<br />

[Figure 1: Workflows in Healthcare (Source: Own figure) – inter-sectoral, inter-departmental and intra-departmental workflows between practitioner and clinician]<br />

Research Landscape<br />

Three major research areas can be identified:<br />

• Methods for modelling healthcare workflows<br />

• Technical aspects of healthcare WfMS<br />

• Implementation of workflow management<br />

Figure 2 illustrates the three areas of our research landscape. Additionally we made a subclassification<br />

within these areas. The next chapters address selected parts of the landscape. The focus<br />

is on modelling methods, interoperability standards and device integration.<br />

[Figure 2: Research landscape (Source: Own figure) – Methods for Modeling Healthcare Workflows (adaptive workflows; ontology-based resource-constraint modeling; reengineering), Technical Aspects of Healthcare WfMS (architecture; interoperability standards; device integration; frameworks), Implementation of Healthcare Workflows (problem reports and guidelines; process quality and cost measurement)]<br />

Modelling of Healthcare Workflows<br />

Modelling and improvement of healthcare workflows can be seen as an organizational learning process. It is therefore both cost-intensive and time-consuming to continuously create and refine healthcare workflows.<br />

Lin et al. (2000) use data mining to re-engineer healthcare workflows. This method is adapted from non-healthcare, i.e. industrial, experience and is mainly focused on finding the time-dependency patterns within the workflows. In Lin, Hsieh and Pan (2005) the same research group published an approach to continuously improve already implemented workflows by analysing electronic patient records (EPR) using an available stochastic method. Xing et al. (2007) describe a more general approach, introducing an algorithm that mines workflows from event logs.<br />
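The basic idea behind mining workflows from event logs can be sketched by counting direct-succession relations between activities, the raw material that algorithms in the alpha-miner family start from. The activities and the toy log below are invented for illustration:

```python
from collections import Counter

def direct_succession(traces):
    """Count how often activity b directly follows activity a across a
    set of recorded process traces (lists of activity names)."""
    pairs = Counter()
    for trace in traces:
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return pairs

# Toy event log of three hypothetical patient treatments:
log = [
    ["admit", "x-ray", "diagnose", "discharge"],
    ["admit", "diagnose", "x-ray", "discharge"],
    ["admit", "x-ray", "diagnose", "discharge"],
]
print(direct_succession(log)[("admit", "x-ray")])  # 2
```

From these counts a mining algorithm derives causal, parallel and choice relations between activities and assembles them into a workflow model; the sketch stops at the counting step.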

Sutherland and van den Heuvel (2006) introduce another approach. They adapt workflows while documenting activities and raise an alert if any activities are forgotten or a combination of activities conflicts with known rules. The approach is introduced for the operating room; it should be investigated whether it can also be used in other areas of healthcare workflows, e.g. intensive or normal care.


Browne, Schrefl and Warren (2004) introduce the approach of self-modifying workflows. They provide a set of operations to be applied to a workflow schema, which turn abstract sub-workflows into concrete ones through completion and alteration of template primitives.<br />

Wang et al. (2007) describe a theoretical resource-constrained workflow modelling approach, which is stated to be usable for the workflow management of an emergency response system. This is one of those special areas in healthcare where a different kind of workflow management system must be implemented. There are other special areas, e.g. outpatient treatment, where it would be interesting to investigate whether there are differences to "normal" healthcare workflows.<br />

Because workflow implementation and the measurement of improvements ultimately lead to benchmarking between different hospitals and their differing workflow implementations, the mapping between different workflow models will become more interesting. Haller, Oren and Kotinurmi (2006) address this problem by introducing their multi-meta-model process ontology. As a practical demonstration they show the mapping by "translating an IBM Websphere MQ Workflow model into the m3po ontology and subsequently extracting an Abstract BPEL model from the ontology".<br />

An interesting modelling problem is that environmental conditions (e.g. staff changes, room changes, etc.) change during long-running workflows while execution is still authorized. Shafiq et al. (2006) describe the problem "to provide the right data to the right person at the right time", which leads to the approach of dynamically adapting long-running workflows.<br />

Implementation of Workflows<br />

Murray (2003) describes in a multiple case study the implementation of a commercially available healthcare workflow system in two hospital settings. He concludes that even though the technology is receiving broad acceptance, there are many barriers to its implementation. The change from manual to computer-based workflow is stated to be more complex than expected, but there is no quantitative measurement of key factors. An investigation in this area would be very interesting, because there are also very few case studies in Germany.<br />

Interoperability Standards<br />

Standardized communication enables the exchange of data between all participating healthcare information systems (Healthcare Information and Management Systems Society 2006, 2). Interoperability processes in healthcare and the medical market require generally accepted standards (Kuhn et al. 2006) in order to achieve the following: efficiently and effectively combine distributed medical systems, increase competition and reduce costs, easily update or replace healthcare IT products, and reduce errors to make healthcare services safer (WHO 2006; CEN/TC 251 European Standardization of Health Informatics). IT standards in the healthcare sector can be divided into two groups: syntactic and semantic standards. For an overview of current, worldwide accepted healthcare IT standards, see Figure 3. Sunyaev (2008) provides a comprehensive discussion of accepted IT standards in healthcare.<br />



Figure 3: Medical Communication and Documentation Standard (Source: Own figure)<br />

The correct transmission of medical and administrative data between heterogeneous and distributed medical information systems is based on syntactic standards. These are mainly HL7/CDA (Health Level Seven Inc., Ann Arbor, MI 2005; Dolin et al. 2001; Dolin et al. 2006), DICOM (NEMA 2005) and EDIFACT (EDIFACT 2006). The German standard xDT, promoted by the National Association of Statutory Health Insurance Physicians (KBV, Kassenärztliche Bundesvereinigung Deutschland 2005), focuses on data transfer in the German healthcare system.<br />
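To illustrate what a syntactic standard defines, the following sketch splits an HL7 v2-style segment into its fields and components. Real HL7 parsing must also honour the separators declared in the MSH segment, field repetitions and escape sequences, all of which are ignored here; the sample segment is invented:

```python
def parse_hl7_segment(segment, field_sep="|", comp_sep="^"):
    """Split one HL7 v2-style segment into fields; fields containing
    the component separator are split further into component lists."""
    fields = segment.split(field_sep)
    return [f.split(comp_sep) if comp_sep in f else f for f in fields]

# Hypothetical patient-identification segment:
pid = "PID|1||12345||Doe^John"
parsed = parse_hl7_segment(pid)
print(parsed[0])  # PID
print(parsed[5])  # ['Doe', 'John']
```

The point is that the syntactic standard only fixes this structural layer; agreeing on what field 5 *means* is the job of the semantic standards discussed next.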

Semantic standards, on the other hand, ensure the correct interpretation of the content of the electronically exchanged data. Established standards are LOINC (Logical Observation Identifiers Names and Codes (LOINC®) 2006), SNOMED (SNOMED International 2005), ICD (DIMDI 2006; WHO 2006), MeSH (UMLS 2006), and UMLS (UMLS 2006).<br />

The current standardization situation in the healthcare sector has been described as unsatisfactory (Institute of Medicine of the National Academies 2001). Until there is generally accepted standardization at the syntactic as well as the semantic level, the advantages listed previously cannot all be exploited (Haux 2005). To improve health services, reduce costs, and attain the vision of seamless healthcare, the problem of standardized software solutions must first be solved. As it can be assumed that there will not be a global standard for the integration of semantic and syntactic data, an integration solution has to be capable of dealing with the different established standards. Complete interoperability, i.e. agreement on the meaning of the medical data and the meaningful cooperation of functions, is one of the main challenges for the near future of healthcare information systems development (Frist 2005).<br />

There are many different approaches to standardizing electronic communication and documentation in healthcare to achieve interoperability. Some of the promising standardization initiatives are summarized in Table 1. The analysis described in Table 1 shows that the existing standardization approaches for interoperability in healthcare can be divided into two opposing groups (Märkle & Lemke 2002). On the one hand, it is mainly industry that propagates and adopts proprietary standards like HL7 and DICOM. On the other hand, there are generally university- and state-funded initiatives which use and support open standards, as initiated by the European Committee for Standardization (CEN/TC 251 European Standardization of Health Informatics). This difference is also regional: the former are mainly located in the U.S., whereas the latter initiatives come from Europe.<br />



For the future healthcare system it is to be hoped that both camps learn from each other and grow together, so that the vision of complete interoperability in the healthcare sector can come true. Nevertheless, all initiatives have raised important issues and contributed indispensable knowledge in attaining the present status.<br />

Architecture of Workflow Management Systems in healthcare information systems<br />

The term architecture in the area of workflow management was defined by Jablonski (1996). Subsequently, Becker and zur Mühlen investigated the architecture of WfMS (Becker et al. 2002) and discussed the granularity of those systems in "Rocks, Stones and Sand" (zur Muehlen/Becker 1999). Today most workflow management systems are based on standard workflow engines. Haux et al. (2003) and Reinhardt (2007) describe the architecture of Soarian, and Lechleitner et al. (2003) the architecture of Cerner Millennium, but they do not work out whether there are specific requirements regarding the architecture of healthcare information systems that support workflow management.<br />



Table 1: Overview of Current Approaches for Integrating the Digital Healthcare Enterprise (Source: Own figure)<br />

Integrating the healthcare enterprise (IHE; Hornung/Goetz/Goldschmidt 2005)<br />

Characterization and goal: IHE is an international initiative, driven by healthcare professionals and industry, to improve the communication of medical information systems and the exchange of data.<br />

Approach and internationally accepted standards: IHE's approach to information integration is based on the propagation and integration/usage of the HL7 and DICOM standards. IHE promotes and advances these standards as suggestions for standardizing bodies.<br />

Professionals and citizens network for integrated care (PICNIC; Danish Center for Health Telematics 2003)<br />

Characterization and goal: PICNIC is a European project of regional healthcare providers in a public-private partnership with industry to develop new healthcare networks and to defrag the European market for healthcare telematics.<br />

Approach and internationally accepted standards: The development follows an open source model, i.e. an open and interoperable architecture with exchangeable components (the aim is an easy and simple integration of external products). All components must be based on established standards, such as HL7/CDA.<br />

Open electronic health record (openEHR; openEHR 2004; The openEHR foundation 2006)<br />

Characterization and goal: The openEHR foundation is dedicated to developing an open specification and implementation for the electronic health record (EHR). OpenEHR advances the experiences of the GEHR projects (GEHR 2006; Blobel 2006) in England and Australia.<br />

Approach and internationally accepted standards: The project works closely with standards (e.g. HL7). However, it does not adopt them verbatim but tests, implements and improves their integration and application while giving feedback to the standards bodies.<br />

Standardized communication between information systems in physician's offices and hospitals using XML (SCIPHOX; SCIPHOX 2006; Gerdsen et al. 2005)<br />

Characterization and goal: SCIPHOX is a German initiative with the aim of defining a new common communication standard for ambulatory and inpatient healthcare facilities.<br />

Approach and internationally accepted standards: The basis for the information exchange is the XML-based HL7/CDA standard. SCIPHOX adapts and improves this global standard for local (German) needs.<br />

www.akteonline.de (Ückert et al. 2002; akteonline.de 2006; Schwarze et al. 2005)<br />

Characterization and goal: akteonline.de is a German state-aided project to develop a Web-based electronic healthcare record.<br />

Approach and internationally accepted standards: akteonline.de developed dynamic Web pages, which can be accessed via the internet and look similar to physicians' and hospital software. The project is based on the common communication standards (DICOM and HL7/CDA).<br />

Service Oriented Medical Device Integration<br />
and HL7/CDA).


Medical devices play a decisive role in the process of medical treatment. In the context of workflows, devices must be integrated in a way that a WfMS can handle them; the interoperability of medical devices is therefore an important factor. A definition of interoperability in the context of medical devices is given in Lesh et al. (2007) and is discussed in the following sub-chapter. Afterwards, the state of the art concerning the service-oriented integration of (medical) devices is presented and research gaps are identified.

Device Interoperability

Lesh et al. (2007) base their definition of interoperability on complexity (fig. 4). The least complex interoperability addresses physical connectivity. The most complex interoperability addresses communication, which is defined as "the act or process of transmitting information" (Merriam-Webster 2002), i.e. it deals with syntax and semantics. Service-oriented device integration requires the most complex end of the interoperability continuum, because devices are interpreted as business units that provide their functionality as services.
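The continuum can be made concrete with a small sketch. The level names below are our own illustrative labels, not terminology from Lesh et al. (2007); the point is only that a device usable as a service must sit at the communication end of the continuum:

```python
from enum import IntEnum

class InteropLevel(IntEnum):
    """Illustrative labels for the interoperability continuum,
    ordered from least to most complex (not official terminology)."""
    PHYSICAL = 1    # plugs, cables, signal levels
    SYNTACTIC = 2   # shared message formats (e.g. HL7 segments)
    SEMANTIC = 3    # shared meaning: full communication

def usable_as_service(device_level: InteropLevel) -> bool:
    # Service-oriented integration treats a device as a business unit
    # offering its functionality as a service, which requires the most
    # complex end of the continuum.
    return device_level >= InteropLevel.SEMANTIC

assert not usable_as_service(InteropLevel.PHYSICAL)
assert usable_as_service(InteropLevel.SEMANTIC)
```

A device that is merely physically or syntactically connected can be cabled up and parsed, but not meaningfully invoked as a service by a WfMS.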

Figure 4: Device Interoperability (Source: Based on Lesh et al. (2007, 2))

Lesh et al. (2007) list 23 potential benefits of having interoperable medical devices, e.g. improved quality of care and better workflow support. However, "interoperability is an almost nonexistent feature of medical devices" (Lesh et al. 2007, 1). The reasons are manifold; for example, there is no incentive for device manufacturers to interoperate with other manufacturers' devices. Several consortia are trying to fix this issue by developing recommendations for standardized device communication, e.g. the Medical Device Plug and Play program (MD PnP) 1, the Continua Health Alliance (Continua) 2 and Integrating the Healthcare Enterprise (IHE) 3. But these approaches, as well as several current publications (e.g. Zitzmann and Schumann (2007), Garguilo et al. (2007) and Hotchkiss, Robbins and Robkin (2007)), do not address a service-oriented integration of medical devices.

Service Oriented Device Integration

In the healthcare domain only one publication can be found that addresses the service-oriented integration of medical devices; it was published by employees of Drägerwerk AG and Dräger Medical AG (Strähle et al. 2007). The authors interconnect modalities of an anaesthesia workplace (ZEUS, Dräger Medical) and an endoscopic surgery workplace (OR1, Karl Storz). Devices are plugged into a Web application server that interprets the proprietary device protocols and offers the devices' functionalities as web services.
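The adapter idea described above — a server-side wrapper interpreting a proprietary device protocol and re-exposing the device's functions as service operations — can be sketched as follows. The device, its command frames and the operation names are invented for illustration; they are not the protocol or API used by Strähle et al.:

```python
class ProprietaryVentilatorLink:
    """Stand-in for a vendor-specific serial/TCP device protocol.
    A real link would talk to hardware; here a canned frame is answered."""
    def send(self, frame: bytes) -> bytes:
        if frame == b"GET TIDAL_VOL\r\n":
            return b"TIDAL_VOL=480\r\n"
        raise ValueError("unknown frame")

class VentilatorService:
    """Service facade: each device function becomes a named operation
    that a web-service stack or WfMS could invoke, hiding the
    proprietary frame format behind a typed interface."""
    def __init__(self, link: ProprietaryVentilatorLink):
        self._link = link

    def get_tidal_volume_ml(self) -> int:
        reply = self._link.send(b"GET TIDAL_VOL\r\n")
        # Parse the proprietary "KEY=VALUE" frame into a typed result.
        return int(reply.split(b"=")[1].strip())

service = VentilatorService(ProprietaryVentilatorLink())
assert service.get_tidal_volume_ml() == 480
```

The facade is what makes the device look like a business unit to the workflow layer: the WfMS sees `get_tidal_volume_ml`, never the byte frames.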

1 http://www.mdpnp.org
2 http://www.continuaalliance.org
3 http://www.ihe.net


In the non-medical domain some important projects concerning service-oriented device integration can be found. First of all, the European research project "SIRENA" 4 started in 2003 with the objective of developing a service infrastructure for real-time embedded networked applications. Four main application domains are represented: industrial, home, automotive and telecommunications. The specific domain requirements were collected and existing device-level SOA technologies were analyzed (OSGi, HAVi, JINI, UPnP and DPWS). In the end, Microsoft's DPWS (Devices Profile for Web Services) 5 was the technology that best met the requirements (Bohn/Bobek/Golatowski 2006). Within the project the world's first DPWS implementation for embedded devices was delivered (Jammes/Mensch/Smit 2007). In the context of manufacturing processes, the manufacturing services provided by SOA-enabled devices could be orchestrated (Jammes et al. 2005). Several benefits of the SIRENA approach were promoted: interoperability, scalable service composability and aggregation, decoupling of logical and physical aspects, plug-and-play connectivity, seamless integration with enterprise networks, integration with legacy technology and simplified application development (Jammes/Smit 2005). When the project finished in 2006, the WS4D (Web Services for Devices) 6 initiative was formed to continue the development of different DPWS toolkits. In addition, the follow-up projects SODA (Service Oriented Device and Delivery Architecture) 7 and SOCRADES 8 (Service-Oriented Cross-Layer Infrastructure for Distributed Smart Embedded Systems) were founded. SODA focuses on the creation of "a comprehensive, scalable, easy to deploy ecosystem built on top of the foundations laid by the SIRENA project" (de Souza et al. 2008, 2), while SOCRADES "aims to further design and implement a more sophisticated infrastructure of web-service enabled devices" (de Souza et al. 2008, 2), i.e. ERP integration is also addressed (Karnouskos et al. 2007).
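The orchestration idea from the SIRENA line of work can be sketched in a few lines: once devices expose their functionality as services, a process is composed by looking up and invoking those services in order. The registry, service names and process steps below are purely illustrative stand-ins (a real deployment would use WS-Discovery and DPWS invocations, not a Python dict):

```python
registry = {}  # service name -> callable (stand-in for WS-Discovery)

def register(name):
    """Decorator registering a device-provided service under a name."""
    def deco(fn):
        registry[name] = fn
        return fn
    return deco

@register("drill")
def drill(part):
    return part + ["drilled"]

@register("polish")
def polish(part):
    return part + ["polished"]

def orchestrate(process, part):
    # Resolve each step's service at runtime, as an orchestration
    # engine would, and chain the intermediate results.
    for step in process:
        part = registry[step](part)
    return part

assert orchestrate(["drill", "polish"], []) == ["drilled", "polished"]
```

The design point is late binding: the process definition names services, not devices, so an equivalent device can be swapped in without changing the workflow.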

A project referring to the healthcare domain is the Eclipse Open Health Framework (OHF), including the sub-project SODA (Service-Oriented Device Architecture) 9. An OSGi-based architecture is proposed, but apart from a high-level description (de Deugd et al. 2006) and some slides presented at EclipseCon 2007 and EclipseCon 2008, no further publications are available.

Identified Research Gaps

Existing works on service-oriented device integration (e.g. de Deugd et al. (2006) and Strähle et al. (2007)) often present quick solutions without analyzing the requirements of the specific domain and without considering existing projects or publications. The research methods used, if any, are also not mentioned. An exception is SIRENA and its follow-up projects: there, the requirements of selected domains were analyzed and existing technologies for device integration were reviewed. Unfortunately, the requirements for a service-oriented integration of medical devices were not analyzed, so it is not clear whether the proposed solutions can be directly transferred to the healthcare domain. The next step for further research must therefore be a complete requirements analysis, followed by an analysis of existing solutions concerning their suitability.

4 http://www.sirena-itea.org
5 http://www.microsoft.com/whdc/connect/rally/rallywsondevices.mspx
6 http://www.ws4d.org
7 http://www.soda-itea.org
8 http://www.socrades.eu
9 http://www.eclipse.org/ohf/components/soda

Conclusion

The area of workflow management is a broad field; within this article only selected aspects could be addressed in detail. Many articles about different modelling requirements and their possible solutions were found. The variety of modelling requirements indicates that the area of healthcare workflow is complex and will remain interesting in the future, because only a few of the presented solutions have been implemented and their outcomes have not yet been proven. The architecture of WfMS depends on the modelling area, so ongoing modelling research indicates that there will be changes in architecture as well. The standard workflow architecture introduced in healthcare today may have to change in the future. In the area of implementation of WfMS only a few case studies could be found; investigations in this area would therefore be very interesting and could reveal new aspects for research.

It was shown that interoperability is an important factor in the healthcare domain. A number of approaches exist for integrating the digital healthcare enterprise, but none of them seems to have become widely accepted. As presented, especially the integration of heterogeneous medical devices is an unsolved problem. Service-oriented integration seems to be a promising approach, but the associated research is only at its beginning.

References

akteonline.de (2006): akteonline.de - eine persönliche elektronische Gesundheitsakte. www.akteonline.de, zugegriffen am 10.05.2006.

Becker, J.; zur Muehlen, M.; Gille, M.; Fischer, L. (2002): Workflow Application Architectures. Classification and Characteristics of Workflow-Based Information Systems. In: Workflow Handbook 2002. Hrsg.: Future Strategies Inc., 2002, S. 39-50.

Blobel, B. (2006): Advanced and secure architectural EHR approaches. In: International Journal of Medical Informatics, Vol. 75 (2006), S. 185-190.

Bohn, H.; Bobek, A.; Golatowski, F. (2006): SIRENA - Service Infrastructure for Real-time Embedded Networked Devices: A service oriented framework for different domains. Paper presented at the International Conference on Networking, International Conference on Systems and International Conference on Mobile Communications and Learning Technologies (ICN/ICONS/MCL 2006).

Browne, E.D.; Schrefl, M.; Warren, J.R. (2004): Goal-focused self-modifying workflow in the healthcare domain. Paper presented at the 37th Annual Hawaii International Conference on System Sciences (HICSS 2004).

CEN/TC 251 European Standardization of Health Informatics (2005): CEN/TC 251-Webseite. http://www.centc251.org/, zugegriffen am 15.12.2005.

Danish Center for Health Telematics (2003): PICNIC (Professionals and Citizens Network for Integrated Care)-Webseite. http://www.medcom.dk/picnic, zugegriffen am 15.12.2005.

de Deugd, S.; Carroll, R.; Kelly, K.E.; Millett, B.; Ricker, J. (2006): SODA: Service Oriented Device Architecture. In: IEEE Pervasive Computing, Vol. 5 (2006) Nr. 3, S. 94-96.

de Souza, L.M.S.; Spiess, P.; Guinard, D.; Köhler, M.; Karnouskos, S.; Savio, D. (2008): SOCRADES: A Web Service based Shop Floor Integration Infrastructure. Paper presented at the First International Conference on the Internet of Things (IOT 2008).

DIMDI (2006): Deutsches Institut für medizinische Dokumentation und Information. http://www.dimdi.de, zugegriffen am 24.04.2006.

Dolin, R.H.; Alschuler, L.; Beebe, C.; Biron, P.V.; Boyer, S.L.; Essin, D.; Kimber, E.; Lincoln, T.; Mattison, J.E. (2001): The HL7 Clinical Document Architecture. In: Journal of the American Medical Informatics Association, Vol. 8 (2001) Nr. 6, S. 552-569.

Dolin, R.H.; Alschuler, L.; Boyer, S.; Beebe, C.; Behlen, F.M.; Biron, P.V.; Shabo, A. (2006): HL7 Clinical Document Architecture, Release 2. In: Journal of the American Medical Informatics Association, Vol. 13 (2006), S. 30-39.

EDIFACT (2006): United Nations Economic Commission for Europe. http://www.unece.org/, zugegriffen am

Garguilo, J.J.; Martinez, S.; Rivello, R.; Cherkaoui, M. (2007): Moving Toward Semantic Interoperability of Medical Devices. Joint Workshop on High Confidence Medical Devices, Software, and Systems and Medical Device Plug-and-Play Interoperability.

GEHR (2006): The Good European Health Record. http://www.chime.ucl.ac.uk/work-areas/ehrs/GEHR/, zugegriffen am 10.05.2006.

Gerdsen, F.; Müller, S.; Bader, E.; Poljak, M.; Jablonski, S.; Prokosch, H.-U. (2005): Einsatz von CDA/SCIPHOX zur standardisierten Kommunikation medizinischer Befunddaten zwischen einem Schlaganfall-/Glaukom-Screening-Programm und einer elektronischen Gesundheitsakte (EGA). Paper presented at the Proceedings of Telemed 2005, Berlin, S. 124-137.

Haller, A.; Oren, E.; Kotinurmi, P. (2006): m3po: An Ontology to Relate Choreographies to Workflow Models. Paper presented at the IEEE International Conference on Services Computing (SCC '06), S. 19-27.

Haux, R. (2005): Health information systems - past, present, future. In: International Journal of Medical Informatics, (2005).

Haux, R.; Seggewies, C.; Baldauf-Sobez, W.; Kullmann, P.; Reichert, H.; Luedecke, L.; Seibold, H. (2003): Soarian(TM) - workflow management applied for health care. In: Methods of Information in Medicine, Vol. 42 (2003) Nr. 1, S. 25-36.

Health Level Seven Inc., Ann Arbor, MI (2005): Health Level Seven Internetseite. http://www.hl7.org/, zugegriffen am 15.03.2005.

Healthcare Information and Management Systems Society (2006): http://www.himss.org, zugegriffen am 08.05.2006.

Hornung, G.; Goetz, C.F.-J.; Goldschmidt, A.J.W. (2005): Die künftige Telematik-Rahmenarchitektur im Gesundheitswesen: Recht, Technologie, Infrastruktur und Ökonomie. In: Wirtschaftsinformatik, Vol. 46 (2005) Nr. 3, S. 171-179.

Hotchkiss, J.; Robbins, J.; Robkin, M. (2007): MD-Adapt - A Proposed Architecture for Open-Source Medical Device Interoperability. Joint Workshop on High Confidence Medical Devices, Software, and Systems and Medical Device Plug-and-Play Interoperability.

Institute of Medicine of the National Academies (2001): Crossing the Quality Chasm: The IOM Health Care Quality Initiative. National Academic Press, Washington D.C. 2001.

Jammes, F.; Mensch, A.; Smit, H. (2007): Service-Oriented Device Communications Using the Devices Profile for Web Services. Paper presented at the 21st International Conference on Advanced Information Networking and Applications Workshops (AINAW 2007), Niagara Falls, ON, Canada, S. 947-955.

Jammes, F.; Smit, H. (2005): Service-Oriented Architectures for Devices - the SIRENA View. Paper presented at the 3rd IEEE International Conference on Industrial Informatics (INDIN '05), S. 140-147.

Jammes, F.; Smit, H.; Lastra, J.L.M.; Delamer, I.M. (2005): Orchestration of service-oriented manufacturing processes. Paper presented at the 10th IEEE Conference on Emerging Technologies and Factory Automation (ETFA 2005), S. 617-624.

Karnouskos, S.; Baecker, O.; de Souza, L.M.S.; Spiess, P. (2007): Integration of SOA-ready Networked Embedded Devices in Enterprise Systems via a Cross-Layered Web Service Infrastructure. Paper presented at the 12th IEEE Conference on Emerging Technologies and Factory Automation (ETFA 2007), Patra, Greece.

KBV (Kassenärztliche Bundesvereinigung Deutschland) (2005): xDT - Synonym für elektronischen Datenaustausch in der Arztpraxis. http://www.kbv.de/ita/4274.html, zugegriffen am 15.12.2005.

Kuhn, K.A.; Wurst, S.H.; Bott, O.J.; Giuse, D.A. (2006): Expanding the scope of health information systems. Challenges and developments. In: Methods Inf Med, Vol. 45 (2006) Nr. 1, S. 43-52.

Lechleitner, G.; Pfeiffer, K.P.; Wilhelmy, I.; Ball, M. (2003): Cerner Millennium: the Innsbruck experience. In: Methods of Information in Medicine, Vol. 42 (2003) Nr. 1, S. 8-15.

Lesh, K.; Weininger, S.; Goldman, J.M.; Wilson, B.; Himes, G. (2007): Medical Device Interoperability - Assessing the Environment. Joint Workshop on High Confidence Medical Devices, Software, and Systems and Medical Device Plug-and-Play Interoperability.

Lin, F.-R.; Chou, S.-C.; Pan, S.-M.; Chen, Y.-M. (2000): Mining time dependency patterns in clinical pathways. Paper presented at the 33rd Annual Hawaii International Conference on System Sciences (HICSS 2000).

Lin, F.-R.; Hsieh, L.-S.; Pan, S.-M. (2005): Learning Clinical Pathway Patterns by Hidden Markov Model. Paper presented at the 38th Annual Hawaii International Conference on System Sciences (HICSS '05).

Logical Observation Identifiers Names and Codes (LOINC®) (2006): http://www.regenstrief.org/loinc/, zugegriffen am 24.04.2006.

Merriam-Webster (2002): Merriam-Webster's Medical Dictionary. http://dictionary.reference.com/browse/communication, zugegriffen am 23.06.2008.

Murray, M. (2003): Strategies for the successful implementation of workflow systems within healthcare: a cross case comparison. Paper presented at the 36th Annual Hawaii International Conference on System Sciences (HICSS 2003).

NEMA (2005): DICOM-Webseite. http://medical.nema.org/, zugegriffen am 15.12.2005.

openEHR (2004): OpenEHR-Webseite. http://www.openehr.org/, zugegriffen am 15.12.2005.

Reinhardt, E.R. (2007): Workflow solutions with healthcare IT. In: Yearbook of Medical Informatics, (2007), S. XI-XIII.

Schwarze, J.-C.; Teßmann, S.; Sassenberg, C.; Müller, M.; Prokosch, H.-U.; Ückert, F. (2005): Eine modulare Elektronische Gesundheitsakte als Plattform einer verteilten Entwicklung. Paper presented at the 50. Jahrestagung der Deutschen Gesellschaft für Medizinische Informatik, Biometrie und Epidemiologie (gmds) und 12. Jahrestagung der Deutschen Arbeitsgemeinschaft für Epidemiologie (dae), Freiburg im Breisgau, Deutschland, S. 114-116.

Shafiq, B.; Samuel, A.; Bertino, E.; Ghafoor, A. (2006): Technique for Optimal Adaptation of Time-Dependent Workflows with Security Constraints. Paper presented at the 22nd International Conference on Data Engineering (ICDE '06), S. 119.

SNOMED International (2005): SNOMED International Web-Seite. http://www.snomed.org/index.html, zugegriffen am 15.12.2005.

Standardized Communication of Information Systems in Physician Offices and Hospitals using XML (SCIPHOX) (2006): Arbeitsgemeinschaft SCIPHOX GbR mbH c/o DIMDI, 2006.

Strähle, M.; Ehlbeck, M.; Prapavat, V.; Kück, K.; Franz, F.; Meyer, J.-U. (2007): Towards a Service-Oriented Architecture for Interconnecting Medical Devices and Applications. Joint Workshop on High Confidence Medical Devices, Software, and Systems and Medical Device Plug-and-Play Interoperability.

Sunyaev, A.; Leimeister, J.M.; Schweiger, A.; Krcmar, H. (2008): IT-Standards and Standardization Approaches in Healthcare. In: Encyclopedia of Healthcare Information Systems. Hrsg.: Wickramasinghe, N.; Geisler, E. Idea Group Reference, 2008.

Sutherland, J.; van den Heuvel, W. (2006): Towards an Intelligent Hospital Environment: Adaptive Workflow in the OR of the Future. Paper presented at the 39th Annual Hawaii International Conference on System Sciences (HICSS '06).

The openEHR foundation (2006): Introducing openEHR. www.openehr.org, zugegriffen am

Ückert, F.; Görz, M.; Ataian, M.; Prokosch, H.-U. (2002): akteonline - an electronic healthcare record as a medium for information and communication. Paper presented at Medical Informatics Europe, Budapest, S. 293-297.

UMLS (2006): United States National Library of Medicine. http://www.nlm.nih.gov/research/umls/, zugegriffen am 24.04.2006.

Wang, J.; Tepfenhart, W.; Rosca, D.; Tsai, A. (2007): Resource-Constrained Workflow Modeling. Paper presented at the First Joint IEEE/IFIP Symposium on Theoretical Aspects of Software Engineering (TASE '07), S. 171-177.

WHO (2006): World Health Organization. http://www.who.int, zugegriffen am

Xing, J.; Zhishu, L.; Cheng, Y.; Yin, F.; Baolin, L.; Chen, L. (2007): Mining Process Models from Event Logs in Distributed Bioinformatics Workflows. Paper presented at the First International Symposium on Data, Privacy, and E-Commerce (ISDPE 2007), S. 8-12.

Zitzmann, R.; Schumann, T. (2007): Interoperable Medical Devices Due to Standardized CANopen Interfaces. Joint Workshop on High Confidence Medical Devices, Software, and Systems and Medical Device Plug-and-Play Interoperability.

zur Muehlen, M.; Becker, J. (1999): Workflow Management and Object-Orientation - A Matter of Perspectives or Why Perspectives Matter. Paper presented at the 2nd OOPSLA Workshop on the Design and Application of Object-Oriented Workflow Management Systems.


Previously published Arbeitspapiere (working papers) and Studien (studies) of the Lehrstuhl für Wirtschaftsinformatik, Technische Universität München

As of: July 2008

Arbeitspapier-Nr.

33 Sunyaev, A.; von Beck, J.; Jedamzik, S.; Krcmar, H.: IT-Sicherheitsrichtlinien für eine sichere Arztpraxis. Arbeitspapier Nr. 33, Garching 2008.

32 Huber, M.; Sunyaev, A.; Krcmar, H.: Technische Sicherheitsanalyse der elektronischen Gesundheitskarte. Arbeitspapier Nr. 32, Garching 2008.

31 Schwertsik, A.; Rudolph, S.; Krcmar, H.: Empirische Untersuchung zur Ist-Situation der Planung und Steuerung der IT in großen mittelständischen Unternehmen. Arbeitspapier Nr. 31, Garching 2007.

30 Baume, M.; Schulze, E.; Kettler, S.; Hummel, L.; Lucas, Y.; Thalhammer, V.; Rathmayer, S.; Krcmar, H.: Nutzung von eLearning-Komponenten in CLIX - Evaluation einer Lehrveranstaltung im Rahmen des Projektes „elecTUM" an der TU München. Arbeitspapier Nr. 30, Garching 2007.

29 Baume, M.; Schulze, E.; Nuber, C.; Rathmayer, S.; Krcmar, H.: Plattformunterstützte Prozessabläufe in der Lehre der TU München. Arbeitspapier Nr. 29, Garching 2007.

28 Ortmann, M.; Böhme, M.; Schweiger, A.; Krcmar, H. (2007): Ermittlung von Verbesserungspotenzialen und Entwurf von Lösungskonzepten zur flexiblen und dynamischen Unterstützung der Prozesse in der Klinik für Anaesthesiologie am Klinikum rechts der Isar der Technischen Universität München durch gezielten IT-Einsatz. Arbeitspapier Nr. 28, Garching 2007.

27 Bögelsack, A.; Wittges, H.; Krcmar, H. (2007): White Paper: Techniques for SAP hosting under Solaris 10. Arbeitspapier Nr. 27, Garching 2007.

26 Hörmann, C.; Klapdor, S.; Leimeister, J.M.; Krcmar, H. (2006): IT Management in deutschen Krankenhäusern - Eine Bestandsaufnahme und die Sicht der IT-Entscheider. Eine empirische Untersuchung zur Identifikation von Bedürfnissen, Trends und Prioritäten im IT Management deutscher Krankenhäuser. Arbeitspapier Nr. 26, Garching 2006.

25 Wolf, P.; Wolf, M.; Krcmar, H. (2006): Multilinguale Unternehmensservices in Stuttgart (MUSS). Arbeitspapier Nr. 25, Garching 2006.

24 Walter, S.; Heck, R.; Krcmar, H. (2006): Problem Management in der IT Infrastructure Library (ITIL) - Hinweise für die erfolgreiche Umsetzung. Arbeitspapier Nr. 24, Garching 2006.

23 Böhmann, T.; Taurel, W.; Krcmar, H. (2006): Produktmanagement für IT-Dienstleistungen in Deutschland. Arbeitspapier Nr. 23, Garching 2006.

22 Müller, A.; Leimer, S.; Baume, M. (2006): Evaluationskonzept für die Begleitforschung des Projektes „elecTUM" an der TU München. Arbeitspapier Nr. 22, Garching 2006.

21 Böhmann, T.; Schermann, M.; Krcmar, H. (2006): Reference Model Evaluation: Towards an Application-Oriented Approach. Arbeitspapier Nr. 21, Garching 2006.

20 Walter, S.; Leimeister, J.M.; Krcmar, H. (2006): Chancen und Herausforderungen digitaler Wertschöpfungsnetze im After-Sales-Service-Bereich der deutschen Automobilbranche. Arbeitspapier Nr. 20, Garching 2006.

19 Hauke, R.; Baume, M.; Krcmar, H. (2005): Kategorisierung von Planspielen - Entwicklung eines übergreifenden Strukturschemas zur Einordnung und Abgrenzung von Planspielen. Arbeitspapier Nr. 19, Garching 2005.

18 Taranovych, Y.; Rudolph, S.; Förster, C. (2005): Standards für die Entwicklungsprozesse digitaler Produktionen. Arbeitspapier Nr. 18, Garching 2005.

17 Voelkel, D.; Taranovych, Y.; Rudolph, S.; Krcmar, H. (2005): Coach-Bewertung zur Unterstützung der Coach-Suche im webbasierten Coaching digitaler Produktionen. Arbeitspapier Nr. 17, Garching 2005.

16 Mohr, M.; Krcmar, H. (2005): Bildungscontrolling: State of the Art und Bedeutung für die IT-Qualifizierung. Arbeitspapier Nr. 16, Garching 2005.

15 Jehle, H.; Krcmar, H. (2005): Handbuch: Installation von SAP R/3 Enterprise, ECC 5.0 und BW 3.5 IDES auf SunFire v40z. Arbeitspapier Nr. 15, Garching 2005.

14 Kutschke, C.; Taranovych, Y.; Rudolph, S.; Krcmar, H. (2005): Web Conferencing als Erfolgsfaktor für webbasiertes Projekt-Coaching. Arbeitspapier Nr. 14, Garching 2005.

13 Baume, M.; Hummel, S.; Krcmar, H. (2004): Abschlussbericht der Evaluation im Projekt Webtrain. Arbeitspapier Nr. 13, Garching 2004.

12 Baume, M.; Krcmar, H. (2004): Beurteilung des Online-Moduls „Informationsmanagement" aus kognitionswissenschaftlicher, lerntheoretischer und mediengestalterischer Sicht. Arbeitspapier Nr. 12, Garching 2004.

11 Leimeister, J.M. (2005): Mobile Sportlerakte. Arbeitspapier Nr. 11, Garching 2005.

10 Mohr, M.; Wittges, H.; Krcmar, H. (2005): Bildungsbedarf in der SAP-Lehre: Ergebnisse einer Dozentenbefragung. Arbeitspapier Nr. 10, Garching 2005.

9 Hummel, S.; Baume, M.; Krcmar, H. (2004): Nutzung von Teamspace im Projekt WebTrain. Arbeitspapier Nr. 9, Garching 2004.

8 Hummel, S.; Luick, S.; Baume, M.; Krcmar, H. (2004): Didaktisches Konzept für das Projekt WebTrain. Arbeitspapier Nr. 8, Garching 2004.

7 Wolf, P.; Krcmar, H. (2003): Wirtschaftlichkeit von elektronischen Bürgerservices - Eine Bestandsaufnahme 2002. Arbeitspapier Nr. 7, Garching 2003.

6 Esch, S.; Mauro, C.; Weyde, F.; Leimeister, J.M.; Krcmar, H.; Sedlak, R.; Stockklausner, C.; Kulozik, A. (2005): Design und Test eines mobilen Assistenzsystems für krebskranke Jugendliche. Arbeitspapier Nr. 6, Garching 2005.

5 Schweizer, K.; Leimeister, J.M.; Krcmar, H. (2004): Eine Exploration virtueller sozialer Beziehungen von Krebspatienten. Arbeitspapier Nr. 5, Garching 2004.

4 Knebel, U.; Leimeister, J.M.; Krcmar, H. (2004): Empirische Ergebnisse eines Feldversuchs: Mobile Endgeräte für krebskranke Jugendliche. Arbeitspapier Nr. 4, Garching 2004.

3 Rudolph, S.; Krcmar, H. (2004): Zum Stand digitaler Produktionen. Arbeitspapier Nr. 3, Garching 2004.

2 Mohr, M.; Hoffmann, A.; Krcmar, H. (2003): Umfrage zum Einsatz des SAP Business Information Warehouse in der Lehre. Arbeitspapier Nr. 2, Garching 2003.

1 Daum, M.; Krcmar, H. (2003): Webbasierte Informations- und Interaktionsangebote für Krebspatienten 2002 - Ein Überblick. Arbeitspapier Nr. 1, Garching 2003.


Studien-Nr.

13 Krcmar, H.; Böhmann, T.; Leimeister, J.M.; Wolf, P.; Wittges, H. (2008): 2nd Workshop on Information Systems and Services Sciences 2008 (2nd WISSS 08).

12 Leimeister, S.; Krcmar, H. (2008): CIO-Trends 2008 - Ergebnisse einer Befragung unter deutschen IT-Führungskräften, Garching 2008.

11 Krcmar, H.; Böhmann, T.; Leimeister, J.M.; Wolf, P.; Wittges, H. (2008): Workshop on Information Systems and Services Sciences 2008 (WISSS 08).

10 Mohr, M.; Schubert, U.; Wittges, H.; Krcmar, H.; Schrader, H. (2008): Enterprise Software Training: Current Status and Trends - Results of the 4th UCC Training Needs Analysis 2007.

9 Mohr, M.; Schubert, U.; Wittges, H.; Krcmar, H.; Schrader, H. (2007): Unternehmenssoftware-Ausbildung: Aktueller Stand und Trends - Ergebnisse der 4. UCC-Bildungsbedarfanalyse 2007.

8 Mohr, M.; Ebner, W.; Wittges, H.; Krcmar, H.; Schrader, H. (2007): Unternehmenssoftware-Ausbildung an deutschen Hochschulen und Schulen: Ergebnisse der 3. UCC-Bildungsbedarfanalyse 2007.

7 Mohr, M.; Krcmar, H.; Hoffmann, A. (2006): Studentenorientiertes Anwender-Einführungstraining für integrierte Unternehmenssoftware: Konzeption und Kursdesign am Beispiel mySAP ERP. Studie Nr. 7, Garching 2006.

6 Böhmann, T.; Taurel, W.; Dany, F.; Krcmar, H. (2006): Paketierung von IT-Dienstleistungen: Chancen, Erfolgsfaktoren, Umsetzungsformen: Zusammenfassung einer Expertenbefragung. Studie Nr. 6, Garching 2006.

5 Hauke, R.; Baume, M.; Krcmar, H. (2006): Computerunterstützte Management-Planspiele: Ergebnisse einer Untersuchung des Planspieleinsatzes in Unternehmen und Bildungseinrichtungen. Studie Nr. 5, Garching 2006.

4 Rudolph, S.; Taranovych, Y.; Pracht, B.; Förster, C.; Walter, S.; Krcmar, H. (2004): Erfolgskriterien im Projektmanagement digitaler Produktionen. Studie Nr. 4, Garching 2004.

3 Wittges, H.; Mohr, M.; Krcmar, H.; Klosterberg, M. (2005): SAP-Strategie 2005. Die Sicht der IT-Entscheider. Studie Nr. 3, Garching 2005.

2 Hummel, S.; Baume, M.; Krcmar, H. (2004): Abschlussbericht des Projekts WebTrain - Teilprojekt der TUM. Studie Nr. 2, Garching 2004.

1 Junginger, M.; Krcmar, H. (2004): Wahrnehmung und Steuerung von Risiken im Informationsmanagement - Eine Befragung deutscher IT-Führungskräfte. Studie Nr. 1, Garching 2004.
