European Scientific Journal, Jul 2014

I-MEET is an Integrated Model for Evaluating E-government services Transformation from stakeholders' perspectives. It is based on an integration of concepts from value chain management and business process transformation to optimize the system-wide value chain of providers and users simultaneously. It aims to align stakeholders on a common global value, in contrast to traditional disintegrated approaches in which each stakeholder optimizes its local e-service value at the expense of others. The measured variables are derived from the literature and from focus groups. They are then categorized into cost and risk (inputs) and benefit and opportunity (outputs) after a validation process based on Structured Equation Models, using a sample of 1540 user responses on e-services in the UK. Finally, Data Envelopment Analysis is conducted to derive an aggregate e-service satisfaction value from the various inputs and outputs. The empirical results demonstrate that data-derived weights for aggregating indicators are variable rather than fixed across e-services. The novelty of the assessment approach lies in its capability to provide informed suggestions for setting targets to improve an e-service from the perspective of all engaging users. Hence it provides a better transformation of public administration services and improved take-up by citizens and businesses.

European Scientific Journal June 2014 /SPECIAL/ edition vol. 1 ISSN: 1857-7881 (Print)

I-MEET FRAMEWORK FOR THE EVALUATION OF E-GOVERNMENT SERVICES FROM ENGAGING STAKEHOLDERS' PERSPECTIVES

Ibrahim H. Osman, Associate Dean for Research; Abdel Latef Anouze, Assistant Professor, American University of Beirut, Olayan School of Business, Lebanon
Nitham Mohammed Hindi, Dean, PhD, College of Business and Economics, Qatar University, Doha, Qatar
Zahir Irani, Dean, PhD; Habin Lee, Reader, PhD; Vishanth Weerakkody, Reader, PhD, Brunel Business School, Brunel University, UK
Keywords: Data Envelopment Analysis; Electronic Government Service; Performance Measurement; Stakeholders; Structured Equation Modelling; United Kingdom; Qatar; Lebanon

Introduction

E-government is defined as utilizing the internet and the world-wide-web for delivering government information and services to citizens (United Nations, 2001). An e-government service (e-service) involves many stakeholders such as citizen, non-citizen and business users; government employees; information technology developers; and government policy makers, public administrators and politicians (Rowley, 2011). E-government is also a complex dynamic socio-technical system encompassing several issues, ranging from governance, policy development, societal trends, information management and interaction to technological change and human factors (Dawes, 2009). Consequently, the evaluation of such e-services becomes a challenging task due to several factors related to the e-government information and communication system (e-system) as well as to the stakeholders. Each stakeholder has different interests, costs, benefits and objectives that impact users' satisfaction and e-service take-up. The achievement of one set of specific e-government objectives for one stakeholder may result in the non-achievement of another set of objectives for another stakeholder (Millard, 2008). This challenge of balancing the interests of various stakeholders and interest groups was also emphasized by Kelly and Nielsen (2011), who highlighted the lack of user-centricity that has recently been recognized by some government officials, such as the Swedish minister Ann-Karin Halt, who said: "...agencies have a good internet presence, but their internet solutions are often designed to meet the agency's need rather than the citizens' needs". Recently, Lee et al. (2008) reported that e-government has been delivered at a high cost to the tax payer, with many successes and failures and little use by citizens.
Their statement is supported by Eurostat (2009) reports showing that the 2008 information technology expenditures for the United Kingdom, the 27 European States, the United States and Turkey were 3.7%, 2.4%, 3.3% and 0.9% of their national Gross Domestic Product, respectively. Moreover, the e-government take-up (use) by individuals aged between 16 and 74 in the United Kingdom, the 27 European States and Turkey was 30%, 35% and 8% of their population, respectively. Lee et al. (2008) also listed other hindering factors, such as the large bureaucratic public sector structures grounded in years of tradition, which leave e-government unable to embrace change, create an environment for innovation, establish tools to measure users' satisfaction, or identify best benchmarks to improve performance. Moreover, Millard et al. (2006) highlighted the lack of a proper measurement strategy for objectives. They suggested that operational output objectives related to the roll-out of e-government services need to be evaluated and measured in relation to specific outcome objectives to increase user satisfaction and e-service take-up; thus they stipulated that high quality e-services would increase users' satisfaction and take-up, decrease the administrative burden, and increase back-office efficiency. Additionally, Irani et al. (2005) emphasized the potential long-term savings and improved service quality levels that can be achieved by developing an efficient e-government infrastructure to facilitate electronic delivery of services to citizens. However, realizing this potential requires e-government to focus on innovation and structural reform, rethinking the way in which e-services are delivered, and simplifying and reengineering organizational processes in order to achieve high quality user-centric e-services.
In e-government practice, the evaluation of e-services is never simple, owing to the tremendous complexity of public performance measurement and the limited availability of information on e-government policy and administrative efficiency indicators. According to the review of customer satisfaction approaches in FreshMinds (2006), traditional performance measurement of government services is often based on modifications of customer satisfaction indices (such as ACSI, the American customer satisfaction index, or EPSI, the European customer satisfaction index), standardized survey instruments (such as CMT, the Canadian common measurement tool), and scale conversion methodologies (Miller and Miller, 1991). All these measurement approaches conduct surveys and operate at a similar level of depth in terms of the questions asked, but they differ in breadth and coverage. They use fixed weights for each measured variable associated with each factor to derive an overall satisfaction score. In our view, a few main points go against the appropriateness of such practical approaches. First, e-service users are not customers; they cannot buy better quality e-services at higher prices, because there is no market competition in e-government. Second, customer satisfaction indices are measured from perceived and expected quality of services, whereas users' satisfaction should instead be a function of the quality of online interactions, reliability, personalization and other opportunities that come out of an e-service. Finally, the perceived risk when using an e-commerce service might be higher than that with an e-service. As a result, there have been a few research initiatives to develop a citizen satisfaction model (CSM) for e-services (Kim et al., 2005; Welch et al., 2005; Lee et al., 2008; Wang et al., 2005). These models focus on e-government measures for different purposes, perspectives and countries (Jaeger and Bertot, 2010).
They also employ statistical approaches to establish relationships and predict satisfaction trends (Chan et al., 2010; Irani et al., 2007, 2008; Wang and Liao, 2008; Weerakkody and Dhillon, 2008). However, they do not suggest a systematic process by which e-service managers can design better services: they are descriptive rather than prescriptive in nature. For a recent review and analysis of the methodologies utilized in e-government research from a user satisfaction perspective on e-services, we refer to Irani et al. (2012). Given the above diversity of e-government measurement models and the challenges mentioned, there has been no formal agreement on a common international framework for evaluation; there is no single view of how such measurement indicators should be designed, or kept relevant and practical over time. The integration of citizens' use of e-services is absent from most measurement frameworks (United Nations, 2010). Hence, the Integrated Model for Evaluating E-government services Transformation (I-MEET) project was initiated with the support of the Qatar National Research Fund (QNRF) to develop a global agreement on a consistent framework to measure e-government services and to include measures on all stakeholders, namely users and providers. In this paper, we aim to develop a standard for the evaluation of an e-service based on both e-system characteristics and user behaviour drawn from users' online experience, measuring users' satisfaction using data envelopment analysis. The reasons to measure each stakeholder's value within the I-MEET framework are mainly the conflict of interests, the need to align the various stakeholders on common goals, and the need to recommend improvements at macro and micro levels from different perspectives (Osman et al., 2013). Please note that citizens/users are used interchangeably in the paper.
The main objectives of the paper are as follows:
- To develop an alternative satisfaction measure using the data envelopment analysis (DEA) efficient frontier methodology. DEA simultaneously considers the multiple output measures (benefit, outcome and personal opportunity factors) generated from the e-system and the multiple input measures (risk and cost to users) in order to determine an aggregate measure of satisfaction. The satisfaction measure thus reflects the overall efficiency and effectiveness of the e-service.
- To experimentally validate that the relationships among Cost-Risk inputs, Benefit-Opportunity outputs and users' satisfaction are statistically significant, using real data collected on users of five e-services in the UK with a new enhanced questionnaire (see Appendix). The experiment provides an additional proof of the validity of the relationships in the COBRA (Cost-Risk and Benefit-Opportunity Analysis) framework that was initially proposed and validated on a sample of Turkish data (Osman et al., 2011, 2014).
- To illustrate how DEA results can generate recommendations for managers to redesign and improve e-services from the citizen's perspective.
- To call for the re-assessment of the current United Nations e-government indices that use fixed weights for indicators, based on our finding that the user stakeholder group prefers variable weights reflecting their interests.

Methods

In this section, we first illustrate an e-service and its engaging stakeholders; the identification process for the set of inputs and outputs, with special focus on users; the data collection process; the statistical validation process; and the data envelopment analysis. In this paper, our methodology is developed from the engaging users' perspective; the I-MEET framework, however, is developed from the perspective of all stakeholders. Stakeholder
groups include users/citizens; businesses; public administrators (employees and politicians); government agencies; e-government project managers; design and IT developers; suppliers; and researchers and evaluators (Rowley, 2011). I-MEET is a mission-driven interconnected framework based on the five main components shown in Figure 1: the mission and desired values of government; the involved internal stakeholders; the affected external stakeholders; the offered e-service(s); and the operating resources support and decision making support system (DMSS). The DMSS performs the business intelligence analysis to determine the desired e-service values. It is also the dashboard for the deceleration and acceleration control process that guides the improvement of e-services along the various evaluation dimensions. The I-MEET evaluation process starts with the identification of the e-service to assess, and of the engaged stakeholder group which provides the real-experience data on the e-service to be evaluated. In this paper, we consider the group of users who have had real interactions and experience with the identified e-service. An e-service is delivered using an e-system, which is a black-box process to users (external stakeholders). The black-box process is the concern of governments and agency providers (internal stakeholders); it is normally designed according to the providers' strategic initiatives, objectives and desired public values. The providers inject various input resources to provide outputs and outcomes to all stakeholders, including users. The users, in turn, provide inputs to the e-system during online interactions in order to receive the e-system's outputs and outcomes. This interaction process during the actual engagement with an e-service is a white-box process to users. The inputs and outputs of the white-box process are the main concern of users and influence users' satisfaction.
Figure 2 illustrates the interaction process between a user and an e-system to obtain an e-service. Thus, an e-service can be defined as the complete cycle of stages, from the first interaction to request a service, through the various input/output online activities while engaging with the e-system, to the final delivery of the service according to the user's desired output and outcome. The identification of the set of inputs and outputs from the user's perspective was based on a systematic approach in which a set of measurable indicators was derived from focus groups with various stakeholders and from the available literature (Irani et al., 2012). Three workshops were held in Qatar, Lebanon and the UK with different stakeholders (users, providers and academics) to generate a questionnaire, from the users' perspective, consisting of 60 questions. The data collection process started by identifying a list of five e-services, namely: Benefits, Retirement and Financial or Job Seekers Support; Driving License Queries; Healthcare Information; Local Government; and Tax Information. The data collection was conducted by a private agency over a six-month period. The statistical validation process validated the set of input and output variables using COBRA, the cost-opportunity and benefit-risk analysis framework in Figure 3. The COBRA framework was proposed to validate the measurement scale of a set of measured variables and their relationships to users' satisfaction on a sample of Turkish e-services (Osman et al., 2011, 2014). The COBRA validation process was based on structured equation modelling and a confirmatory factor analysis in order to group the measured variables into a smaller set of COBRA categories. The prediction of users' satisfaction from users' inputs as predictors was found to follow the significant relationship:

Satisfaction = 1.9 + 0.385 × Opportunity + 0.026 × Benefit − 0.023 × Risk − 0.287 × Cost

Finally, the main analytical component of I-MEET is a decision making support component based on data envelopment analysis. It acts like a dashboard that provides tradeoffs among competing indicators and guidance on how to accelerate and decelerate the I-MEET processes in order to achieve the main goals of the evaluation process for transforming an e-service. Data Envelopment Analysis (DEA) is a non-parametric linear programming approach for multifactor productivity performance analysis. It evaluates the relative efficiencies of a homogeneous set of decision-making units (DMUs), where each DMU (e-service) utilizes multiple inputs and resources (cost and risk variables) to produce multiple outputs and outcomes (benefit and opportunity variables). The efficiency score of a unit is measured by an aggregate function defined as the ratio of the total weighted outputs to the total weighted inputs. A unit with an aggregate efficiency score of 1 (all slack values = 0) is considered efficient (satisfying users), while a score of less than 1 indicates that the e-service unit is inefficient (dissatisfying users). The original DEA constant return to scale model (DEA-CRS) was developed by Charnes et al. (1978). It assumes that a proportional change in inputs results in a similar proportional change in outputs. The DEA-CRS model needs to be executed as many times as there are decision making units in order to determine an aggregate efficiency score for each e-service. The weights for each e-service are optimized in the best interest of the e-service being evaluated, subject to the constraint that the aggregate ratio of every e-service in the set does not exceed a value of 1. Figure 4 provides a mathematical formulation of the primal DEA output-oriented model based on constant returns to scale on the left side, and its associated envelopment dual model on the right.
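The reported SEM relationship can be expressed as a simple scoring function. The following is an illustrative sketch using the coefficients quoted above; the function name and example factor values are our own, with factor scores assumed to be aggregate (e.g., averaged Likert) responses:

```python
def predicted_satisfaction(opportunity, benefit, risk, cost):
    """Predicted user satisfaction from the SEM relationship reported above.

    Inputs are aggregate factor scores (e.g., averaged Likert responses).
    Coefficients are those quoted in the text.
    """
    return 1.9 + 0.385 * opportunity + 0.026 * benefit - 0.023 * risk - 0.287 * cost

# Opportunity and cost dominate: one extra scale point of opportunity adds
# 0.385 to predicted satisfaction, while one extra point of cost subtracts
# 0.287; benefit and risk have much smaller effects.
print(round(predicted_satisfaction(4.0, 4.0, 2.0, 2.0), 3))  # 2.924
```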
In this formulation, given n e-services, where p (p = 1, ..., n) denotes the e-service being evaluated, m represents the number of inputs (cost and risk variables) and s represents the number of outputs (benefit and opportunity variables); y_ki is the amount of output k generated by e-service i; x_ji is the amount of input j required by e-service i; and u_k and v_j are the weights given to output k and input j, respectively. The output-oriented productivity measure of e-service p can be obtained by maximizing the numerator of the ratio (Σ_{k=1..s} u_k y_kp) / (Σ_{j=1..m} v_j x_jp) while setting its denominator equal to 1, as shown in the first constraint of the formulation. The second set of n constraints enforces the relative concept, obtained by imposing that no unit's aggregate ratio may exceed one. Similarly, a primal input-oriented model can be obtained by minimizing the denominator while setting the numerator equal to 1. The primal DEA-CRS output-oriented model is:

maximize    Σ_{k=1..s} u_k y_kp
subject to  Σ_{j=1..m} v_j x_jp = 1
            Σ_{k=1..s} u_k y_ki − Σ_{j=1..m} v_j x_ji ≤ 0,  i = 1, ..., n
            u_k ≥ 0, k = 1, ..., s;  v_j ≥ 0, j = 1, ..., m

For every inefficient unit, DEA identifies a set of efficient units that can be utilized as benchmarks for improving the inefficient one. Benchmarks can be obtained easily by employing the envelopment DEA-CRS input-oriented model when the number of DMUs is very high, owing to its computational efficiency. A DEA variable return to scale (DEA-VRS) model was developed by Banker et al. (1984). It allows variable changes in outputs, unlike the proportional changes of DEA-CRS. The envelopment DEA-VRS model can be obtained by adding the convexity constraint Σ_{i=1..n} λ_i = 1 to the envelopment input-oriented DEA-CRS model, where the λ_i are the dual variables that identify the benchmarks for inefficient units. DEA has been considered one of the big ideas in the history of research in service operations (Chase and Apte, 2007).
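The multiplier model above can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation, assuming `scipy.optimize.linprog` is available; one linear program is solved per DMU, with the output weights u_k and input weights v_j as decision variables:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_scores(X, Y):
    """Multiplier CCR (DEA-CRS) model, one LP per DMU.

    X: (n, m) input matrix, Y: (n, s) output matrix for n DMUs (e-services).
    For each DMU p: maximise sum_k u_k y_kp subject to
    sum_j v_j x_jp = 1 and sum_k u_k y_ki - sum_j v_j x_ji <= 0 for all i.
    """
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    # Inequality block: for every DMU i, u.y_i - v.x_i <= 0 (variables [u, v])
    A_ub = np.hstack([Y, -X])
    b_ub = np.zeros(n)
    scores = []
    for p in range(n):
        c = np.concatenate([-Y[p], np.zeros(m)])                   # minimise -u.y_p
        A_eq = np.concatenate([np.zeros(s), X[p]]).reshape(1, -1)  # v.x_p = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m))
        scores.append(-res.fun)
    return scores

# Three toy e-services with one input and one output: the units with the best
# output/input ratio score 1 (efficient); the rest are scored relative to them.
print(ccr_scores([[2.0], [4.0], [3.0]], [[2.0], [2.0], [3.0]]))  # ≈ [1.0, 0.5, 1.0]
```

With a single input and output, the CCR score reduces to each unit's productivity ratio divided by the best ratio, which makes the toy result easy to verify by hand.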
DEA applications in service operations include: examination of the efficient use of different types of enterprise information in the realization of strategic performance (Bendoly et al., 2009); assessing the relative efficiency of local government in Portugal (Afonso and Fernandes, 2008); efficient service location design for a government agency in the State of Michigan (Narasimhan et al., 2005); evaluation of the efficiency of units within a large-scale network of petroleum distribution facilities (Ross and Droge, 2004); performance assessment of joint maintenance shops in the Taiwanese army (Sun, 2004); and evaluation of the relative efficiency of nurses (Osman et al., 2010). For more details on DEA theory, models and applications, please refer to Cooper et al. (2007).

Results and discussions

The users' online experience was captured from the responses of 1540 UK real-time users of the five identified e-services. Enough time was allowed to collect more than 300 responses per e-service (see Table 1). Table 2 provides a description of the data and their grouping. The set of 60 questions in the questionnaire was divided into two parts. Part one contained 49 questions related to the users' e-service experience, for measuring the users' value of satisfaction. These questions were further subdivided into a set of 4 factors and associated sub-categories to generate recommended improvements. The value of each variable was obtained by averaging the Likert-scale responses to the included questions. Part two contained the other 11 questions, collecting bio-data in order to identify the characteristics of satisfied/dissatisfied users for managerial action. To validate the proposed COBRA relationships (Figure 5) among the identified variables, factors and sub-categories and the users'
satisfaction, a structured equation model was used to test the fitness of the proposed model (McDonald and Ho, 2002). The results in Table 3 show that all statistical fit indices were within acceptable levels with p < 0.01: for instance, χ²/df = 4.39; Root Mean Square Error of Approximation, RMSEA = 0.05; Normed Fit Index, NFI = 0.93; Comparative Fit Index, CFI = 0.96. The SEM results thus provide evidence that the proposed COBRA model has a satisfactory fit and that all of the measured variables can be used to measure users' satisfaction with the corresponding factors/constructs. The COBRA model captures the rational behaviour of users: if cost and risk are lowest and benefit and opportunity are highest, then the users will be the most satisfied. This rationality is translated into the DEA modelling process in the following way: if the inputs (cost and risk) are lowest and the outputs (benefit and opportunity) are highest, then the associated decision making unit has the highest DEA score of 1 (equivalent to most satisfied). Moreover, the COBRA validation was based on statistical tests that predict trends but do not allow the identification of best-practice benchmarks for improvement. Such best practices are often treated as outliers and may be neglected or dropped from the statistical analysis. On the contrary, those outlying points may represent the best practices that should guide the improvement process, and DEA is more capable of identifying them: they form the efficient frontier in DEA terminology. Therefore, we use the same indicators to generate improvement recommendations as well as DEA scores of users' satisfaction. In order to generate the satisfaction of users with an e-service, the appropriate DEA model must be selected based on the characteristics of the users and the orientation of the desired improvements. Table 4 presents an analysis of the bio-data of the respondents.
It can be seen that the users come from heterogeneous groups with different interests, usage, annual income and computer skills. These features require the implementation of a data envelopment model with variable returns to scale. Further, since we are interested in measuring both the efficiency of input utilisation and the effectiveness of the outputs of an e-service, both input- and output-oriented models must be utilised. Therefore, the following two DEA models are used to analyze the collected data: the input-oriented DEA-VRS model (input-oriented DEA with Variable Returns to Scale) and the output-oriented DEA-VRS model. The DEA results reveal a number of observations. First, the efficiencies of transformation differ across e-services and across the desired orientation (Table 5). The input-oriented efficiencies of the e-services range from 63.9% to 66.8% with an average of 64.9%, i.e., the current outputs (opportunity and benefit) could on average be produced with only 64.9% of the current inputs (risk and cost). This indicates that managerial actions are needed to reduce the current resource utilization by about 35% while keeping the current level of outputs. In addition, 86 out of 1540 respondents (5.58%) were fully satisfied, i.e., achieved DEA scores of 1. However, if a reduction of input utilisation is not possible (i.e., keeping the resource utilisation of inputs at the current level), then management should look at increasing the current level of outputs by an average of 20%, since the average output-oriented efficiency is 80%. Similarly, 211 out of 1540 respondents (13.7%) were fully satisfied, i.e., achieved DEA scores of 1. From the analysis in Table 5, it can be seen that the Driving License e-service is the best among all compared e-services.
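The way an efficiency score translates into targets under each orientation can be sketched with the standard radial-target arithmetic. This is an illustrative sketch of that convention (function name and figures are our own, ignoring slacks), not the authors' exact computation:

```python
def radial_targets(inputs, outputs, theta_in, theta_out):
    """Radial DEA targets for an inefficient unit.

    Input orientation: scale every input down to theta_in * x, holding
    outputs fixed. Output orientation: scale every output up to
    y / theta_out, holding inputs fixed. Scores theta are in (0, 1];
    slack adjustments are ignored in this sketch.
    """
    input_targets = [theta_in * x for x in inputs]
    output_targets = [y / theta_out for y in outputs]
    return input_targets, output_targets

# Using the average scores reported in the text (about 0.649 input-oriented
# and 0.80 output-oriented): inputs shrink to ~65% of current levels, while
# outputs grow to 1/0.80 = 125% of current levels under the radial convention.
x_t, y_t = radial_targets([10.0], [8.0], 0.649, 0.80)
print(x_t, y_t)  # ≈ [6.49] [10.0]
```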
Its best-practice operating features can be further documented and analysed so that it can serve as a guiding benchmark for the less efficient e-services. Second, the weights given to each input/output variable differ across e-services within the same country (Table 6). The differences reflect the different importance assigned to the measured variables from the users' perspective. The weights also vary by orientation, and their values provide management with a guide to what matters to users. This observation highlights a very important weakness in the equal-weight approach that has been used to generate the UN e-government indices, and invites more research to reassess the current ranking of countries, since that ranking does not take the relative preferences of countries into account when deriving the scores. Finally, both DEA-VRS models generate improvement targets, expressed as percentage changes for a particular e-service or group of e-services, with reference to the set of best-practice frontier units, i.e., fully satisfied users. For instance, Table 7 provides the average recommended changes for each of the five e-services. Negative values indicate a reduction in the current values of the associated indicators, while positive values indicate the increases over the current values required in order to become efficient or effective from the perspective of the respondents. From Table 7, it can be seen that financial risk and personal risk, followed by the cost of time, have the highest percentages of required improvement, along with the improvement of the technical opportunity, under the input-oriented model. Under the output-oriented model, the recommended changes indicate that financial risk, the cost of time and the technical opportunity must be improved. Both models agree on these recommendations, with different degrees of change.
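The contrast between fixed-weight aggregation (as in the UN indices) and the unit-favouring weights that DEA effectively chooses can be shown in a few lines. The services, indicator values and candidate weight vectors below are hypothetical; the second function mimics DEA's per-unit weight choice over a finite menu of weights rather than solving the full LP:

```python
def fixed_weight_score(values, weights):
    """Composite score with one weight vector imposed on every unit."""
    return sum(w * v for w, v in zip(weights, values))

def best_of_weights(values, candidate_weights):
    """Score a unit under the weight vector most favourable to it,
    mimicking DEA's per-unit choice of weights over a finite candidate set."""
    return max(fixed_weight_score(values, w) for w in candidate_weights)

# Two hypothetical e-services, each measured on two indicators.
services = {"A": [0.9, 0.2], "B": [0.4, 0.8]}
equal = [0.5, 0.5]                                  # fixed equal weights
candidates = [[0.8, 0.2], [0.2, 0.8], [0.5, 0.5]]   # hypothetical weight menu

for name, vals in services.items():
    print(name, fixed_weight_score(vals, equal), best_of_weights(vals, candidates))
```

Under equal weights B outscores A (0.6 vs 0.55), but under each unit's most favourable weights A outscores B (0.76 vs 0.72): the ranking flips, which is exactly why rankings built on fixed weights may merit reassessment.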
In this case, management interested in promoting the provision of e-government services is invited to look at the characteristics of the benchmark and learn new ways to improve the e-service. The importance of the recommendations comes from their grounding in actual observations of an e-service and a group of respondents who achieved the suggested targets, i.e., a set of best-practice efficient services was identified to guide the improvement or change process.

Conclusion

In this paper, a new framework for evaluating e-government services from the stakeholders' perspective was introduced. The framework is a mission-driven approach whose goals are translated into strategies with objectives and initiatives with desired values. These initiatives affect the input-resource efficiencies, the quality and effectiveness of the generated outputs/outcomes, and the business impact of e-service provision. The users are among the key stakeholders, and their opinions, though often neglected, are very important for increasing take-up and achieving providers' objectives. Moreover, while using an e-service, the e-service may require users' inputs to generate outputs and outcomes that impact users' satisfaction, contrary to the desire of the providers. Therefore, developing a users' questionnaire and validating it are of prime importance for capturing the users' values of e-services. The questionnaire was systematically developed using focus groups with users, providers and academics in Qatar, the UK and Lebanon. The generated questionnaire is now validated in this paper using data collected from a large sample of UK respondents. The validation process uses structured equation modelling to provide proof of the existence of significant relationships between cost-risk and benefit-opportunity on one hand and users' satisfaction on the other. The statistical testing provides the second validation of the COBRA framework in the literature.
After the validation process, data envelopment analysis was conducted to determine optimal weights for the variables from the relative perspective of users. The DEA results show that the UK e-services are more effective in terms of output generation and less efficient in terms of input utilization. Hence, the paper provides e-service providers with a management tool that can identify targets for improvement on specific indicators for an e-service to become either input-efficient or output-effective. It also provides references to existing best practices that can guide the change in the improvement process. The DEA analysis also showed that the use of fixed weights to aggregate indicators in producing the United Nations indices may need to be reassessed, simply because the weights of indicators seem to vary within a country and within the same user group of e-services, let alone using fixed weights across nations. This study is the first of its kind to analyse e-services in the UK from the users' perspective using the proposed quantitative approach. The approach can evaluate a single e-service to establish best practice among users, or evaluate multiple e-services to establish best practice among e-services. The research team is currently conducting similar studies to evaluate e-services in Qatar, Lebanon and Turkey from the users' and providers' perspectives. Future research can also benefit from studying the bio-data and written feedback and correlating them with the obtained data envelopment scores, using data-mining tools or other descriptive statistics, to identify the characteristics of satisfied and dissatisfied groups. The various analyses are limited by the use of the Likert scale for the users' responses, owing to the difficulty for users of providing proper estimates of the measured variables. Such a limitation does not affect the proposed approach, but actual data may give better insights and understanding.

References:

Afonso, A., & Santos, M.
(2008). A DEA approach to the relative efficiency of Portuguese public universities. Journal of Portuguese Management Studies,13(1), 67-87. Banker R, Charnes A, Cooper W. Some Models for estimating technical and scale inefficiencies in data envelopment analysis. Management Science 1984; 30(9): 1078-1092. Bendoly, E., Rosenzweig, E. D., & Stratman, J. K. (2009). The efficient use of enterprise information for strategic advantage: A data envelopment analysis.Journal of Operations Management, 27(4), 310-323. Breiman L, Friedman J, Olshen R, Stone C. Classification and regression trees, Pacific Grove, Wadsworth-Monterey, USA; 1984 Chan F, Thong J, Venkatesh V, Brown S, Hu P, Tam K.(2010). Modeling citizen satisfaction with mandatory adoption of an e-government technology. Journal of the Association for Information Systems ; 11(10): 519-549 Chase, R. B., & Apte, U. M. (2007). A history of research in service operations: What's the?big idea??. Journal of Operations Management, 25(2), 375-386. Cooper, W. W., Seiford, L. M., & Tone, K. (2007). Data envelopment analysis: a comprehensive text with models, applications, references and DEA-solver software. Springer. Dawes, S. S. (2009). Governance in the digital age: A research and action framework for an uncertain future. Government Information Quarterly, 26(2), 257-264. EUROSTAT (2009). Eurostat Macroeconomic aggregates at constant prices, breakdown by 31 branches, http://epp.eurostat.cec.eu.int FreshMinds (2006). Measuring Customer Satisfaction: A review of approaches. Retrieved at: http://www.lge.gov.uk/idk/aio/4709438 (May, 2011). Irani, Z., Love, P. E., Elliman, T., Jones, S., & Themistocleous, M. (2005). Evaluating e government: learning from the experiences of two UK local authorities. Information Systems Journal, 15(1), 61-82. Irani Z, Elliman T, Jackson P.(2007) Electronic Transformation of Government in the UK. European Journal of Information Systems ; 16(4): 327-335. 
Irani Z, Love P, Jones S.(2008) Learning lessons from evaluating eGovernment: Reflective case experiences that support transformational government. The Journal of Strategic Information Systems; 17(2): 155-164 Irani, Z., Weerakkody, V., Kamal, M., Hindi, N. M., Osman, I. H., Anouze, A. L., & AlAyoubi, B. (2012). An analysis of methodologies utilised in e-government research: A user satisfaction perspective. Journal of Enterprise Information Management, 25(3), 298-313. Jaeger P, Bertot J.(2010). Designing, Implementing, and Evaluating User-centered and Citizen-centered E-government. International Journal of Electronic Government Research; 6(1): 1-17. Kim, T. H., Im, K. H., & Park, S. C. (2005). Intelligent measuring and improving model for customer satisfaction level in e-government. In Electronic Government(pp. 38-48). Springer Berlin Heidelberg. Lee H, Irani Z, Osman I, Balci A, Ozkan S, Medeni T.(2008) Research Note: Toward a Reference Process Model for Citizen Oriented Evaluation of E-Government Services. Transforming Government: People, Process and Policy ; 2(4): 297-310. Millard, J. (2006). User attitudes to e-government citizen services in Europe. International Journal of Electronic Government Research, 2(2), 49-58. MILLARD J.(2008).E-GOVERNMENT MEASUREMENT FOR POLICY MAKERS. EUROPEAN JOURNAL OF EPRACTICE; 4. Miller, T. I., & Miller, M. A. (1991). Standards of excellence: US residents' evaluations of local government services. Public Administration Review, 503-514. Narasimhan, R., Talluri, S., Sarkis, J., & Ross, A. (2005). Efficient service location design in government services: a decision support system framework. Journal of Operations Management, 23(2), 163-178. Osman I.H., Anouze A., Irani Z, Lee H, Weerakkody V., H .Lee, A. Balc?, T.D. Medeni, V. Weerakkody. (2011). A new COBRAS framework to evaluate e-government services: a citizen centric. Proceedings of tGov Workshop?11 (tGOV11), March 17-18, Brunel University, West London, UK, 2011. 
Osman I.H., Anouze A., Irani Z, Al-Ayoubi, B Lee, H., Balc?, A., Medeni, T.D., and Weerakkody, Vishanth . (2014). COBRA Framework to Evaluate E-Government Services: A Citizen-Centric Perspective. Forthcoming in Government Information Quartely. Osman, I. H., Anouze, A. l., Azad, B., Daouk, L., Zablith, F., Hindi, N. M., Irani Z, Lee H, Weerakkody V. (2013). The elicitation of key performance indicators of e-government providers: A bottom up approach. European, Mediterranean & Middle Eastern Conference on Information Systems 2013. Windsor. Ross, A. D., & Droge, C. (2004). An analysis of operations efficiency in large-scale distribution systems. Journal of Operations Management, 21(6), 673-688. Rowley J.(2011).e-Government stakeholders- Who are they and what do they want?. International Journal of Information Management; 31(1): 53-62 Sun, S. (2004). Assessing joint maintenance shops in the Taiwanese Army using data envelopment analysis. Journal of Operations Management, 22(3), 233-245. United Nations. Department of Economic. (2010). United Nations E-Government Survey 2010: Leveraging E-Government at a Time of Financial and Economic Crisis (Vol. 10). United Nations Publications. Wang L , Bretschneider S , Gant J ( 2005 ). Evaluating web-based e-government services with a citizen-centric approach . Proceedings of 38th Hawaii International Conference on System Sciences , January 3- 6 , Hawaii-USA. Wang Y , Liao Y ( 2008 ). Assessing e-government systems success: A validation of the DeLone and McLean model of information system success . Government Information Quarterly ; 25 ( 4 ): 717 - 733 . Welch E , Hinnant C , Moon M ( 2005 ). Linking citizen satisfaction with e-government and trust in government . Journal of public Administration Research and theory ; 15 ( 3 ): 371 -391 Weerakkody V , Dhillon G. ( 2008 ). Moving from e-government to t-government: A study of process reengineering challenges in a UK local authority context . 
International Journal of Electronic Government Research 2008 ; 4 ( 4 ): 1 - 16 .
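The efficiency scores discussed above can be illustrated with a small linear program. The sketch below implements the standard input-oriented CCR DEA model in its envelopment form; it is a minimal illustration under assumed data, not the paper's actual implementation — the `ccr_efficiency` helper, the three e-services, and their cost/risk (input) and benefit (output) values are hypothetical stand-ins for the validated survey indicators.

```python
# Minimal input-oriented CCR DEA sketch (hypothetical data).
# Envelopment form: min theta s.t. X @ lam <= theta * x_k, Y @ lam >= y_k, lam >= 0.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.

    X: (m x n) inputs, Y: (s x n) outputs; columns are units (e-services).
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                # minimise theta; z = [theta, lambdas]
    A_in = np.hstack([-X[:, [k]], X])          # sum_j lam_j x_ij - theta x_ik <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])  # -sum_j lam_j y_rj <= -y_rk
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(m), -Y[:, k]],
                  bounds=[(None, None)] + [(0, None)] * n,
                  method="highs")
    return res.fun                             # theta in (0, 1]; 1 = input-efficient

# Three hypothetical e-services: two inputs (cost, risk), one output (benefit).
X = np.array([[4.0, 2.0, 5.0],
              [3.0, 1.5, 4.0]])
Y = np.array([[8.0, 4.0, 8.0]])
scores = [round(ccr_efficiency(X, Y, k), 3) for k in range(X.shape[1])]
print(scores)
```

Because the program is solved once per unit, each e-service receives its own optimal weights (the duals of the LP), which is precisely why the aggregation weights are data-derived and variable across e-services rather than fixed, as the empirical results above indicate.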


Ibrahim H. Osman, Abdel Latef Anouze, Nitham Mohammed Hindi, Zahir Irani, Habin Lee, Vishanth Weerakkody. I-MEET FRAMEWORK FOR THE EVALUATION E-GOVERNMENT SERVICES FROM ENGAGING STAKEHOLDERS' PERSPECTIVES, European Scientific Journal, 2014.