E-Learning
Paper instructions:
I need a research paper on an evaluation plan for a model that assesses the readiness of faculty members in crisis countries such as Libya, Tunisia and Egypt. The model has been developed along with its Assessment Measurement Instrument, and the method chosen for evaluation is case studies.
However, in my report I am following a Design Science Research Methodology, where evaluation of models is carried out in two parts:
1. Artificial evaluation: judges the model in a contrived, non-realistic setting (a single case study to be conducted at a private university).
2. Naturalistic evaluation: considers real users in real contexts, i.e. evaluation “within an organisation” (multiple case studies and focus group discussions to be used).
Case studies: three universities, U1, U2 and U3, have been chosen in Libya, Tunisia and Egypt, respectively.
Focus group discussions: to be held at the end to discuss the results of implementing the model and the instrument in a real context.
What is needed is a plan, or the steps to be carried out, for evaluating the readiness model and its assessment instrument using the two approaches above. In other words, what steps do you usually follow to test something using case studies?
I have attached the model and the instrument in File 1 and two examples, Ex1 and Ex2.
Based on the review of e-Learning readiness models, the artefact (the e-Learning Readiness Assessment Model illustrated in Figure 1 below) was developed to guide the study. The proposed model is an eclectic model for assessing faculty e-Learning readiness. The five main parameters used to develop the hybrid model are: personal readiness, attitudinal readiness, institutional readiness, technological readiness and cultural (social) readiness.
Appendix 1 lists the finalised items constituting the Assessment Measurement Instrument. Participants will report their perceptions of these items on a five-point Likert scale from 1 to 5, where 1 means “strongly disagree” and 5 means “strongly agree”. The aspects of readiness are measured with the readiness level benchmarked as follows: ‘not ready, a lot of work to be done’ for mean scores between 1 and 2.6; ‘not ready, some work to be done’ for mean scores between 2.6 and 3.4; ‘expected level of readiness’ for a mean score of 3.4; ‘ready but needs a few improvements’ for mean scores between 3.4 and 4.2; and ‘ready to go’ for mean scores between 4.2 and 5. Aydın and Tasci's identification of the “expected level of readiness” for e-Learning, defined as a mean score of 3.40, will be adopted to evaluate the survey results as part of the artefact evaluation. Figure 2 below illustrates the levels of measuring readiness.
Figure 2: Levels of measuring e-readiness (Aydın and Tasci, 2005)
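To make the benchmark concrete, the short sketch below maps a mean item or category score onto the readiness bands described above. It is a minimal illustration, assuming the cut-off points stated in the text; the function name and the sample responses are illustrative only.

```python
# Hedged sketch: mapping a mean Likert score (1-5) to the readiness bands
# of Aydın and Tasci described above. Band labels and cut-off points come
# from the text; the function and the sample items are illustrative only.

def readiness_level(mean_score: float) -> str:
    """Classify a mean item/category score against the 3.40 benchmark."""
    if not 1.0 <= mean_score <= 5.0:
        raise ValueError("Mean score must lie within the 1-5 Likert range")
    if mean_score < 2.6:
        return "Not ready, a lot of work to be done"
    if mean_score < 3.4:
        return "Not ready, some work to be done"
    if mean_score == 3.4:
        return "Expected level of readiness"
    if mean_score <= 4.2:
        return "Ready but needs a few improvements"
    return "Ready to go"


# Illustrative use with hypothetical responses to three instrument items:
scores = [4, 3, 4]
mean = sum(scores) / len(scores)
print(f"Mean = {mean:.2f} -> {readiness_level(mean)}")
```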
APPENDIX 1: Faculty Readiness Assessment Measurements Instrument
Readiness Category Readiness Factor No. Items
Personal ITE 1 I use e-mail as the main communication tool with my students and colleagues
2 I use office software (e.g., MS Word, MS PowerPoint, etc.) for content delivery and demonstration
3 I use social network sites (e.g., Facebook, Twitter, Skype, etc.)
ATT 4 I am aware of what e-Learning is and of its benefits
5 I support and intend to use e-Learning in my teaching
6 E-Learning will create more stress for me as an instructor
SE 7 I feel confident in my ability to use e-Learning in teaching
8 I can teach myself most of the things I need to know about using e-learning
9 I hesitate to use e-Learning for fear of making mistakes
G 10 Gender dissimilarity makes no difference in the adoption of e-Learning
11 E-Learning is a good teaching tool to resolve gender segregation in HEIs in the region
12 Female faculty members benefit the most from using e-Learning
Attitudinal Beliefs PU 13 Adopting e-Learning will allow me to accomplish teaching tasks more effectively and quickly
14 Using e-Learning system will improve my teaching performance
15 Adoption of e-Learning in the region will have a positive effect on the educational process
PEOU 16 Learning the use of e-Learning will be easy for me
17 E-Learning systems are easy to use
18 Using e-Learning will make it easier for me to teach course content
PF 19 Adopting e-Learning will offer me flexibility as to time and place
20 Adopting e-Learning will save me time and effort commuting to the university
21 Adopting e-Learning will allow me control over my teaching
Institutional PD 22 I need extensive training to develop e-content
23 My university provides training to prepare faculty using e-Learning
24 My university is willing to invest in my future professional development
LS 25 University management is enthusiastic about e-Learning
26 Management support is strong at my college/department
27 My college/department has adequate financial resources to develop technology-based initiatives
IN 28 Greater financial rewards are needed to get faculty to participate in e-Learning
29 Greater release time is needed to get faculty to design and deliver e-Learning courses
30 I am morally motivated and encouraged to use e-Learning
Technological ACC 31 The cost of connecting to the internet is affordable
32 I have an easy access to the internet with good speed in the university
33 I am very satisfied with the networked infrastructure at my university
TS 34 I do not face any technical problems while using the internet
35 A help desk is available at my department when there is a technical problem
36 An able and effective technical team is always available at my university
ISS 37 Information Security is a major concern with e-Learning systems
38 Extra security precautions are needed before I would feel comfortable using e-Learning in my teaching
39 I am satisfied with the security provided by my university to protect information
Cultural LANG 40 Use of local Arabic Language is important for e-Content development and teaching
41 Use of foreign languages such as English and French simplifies and improves online teaching
42 My students won’t accept learning in languages other than Arabic
SC 43 E-Learning is seen as a cultural invasion by society in the region
44 Religious views in the region could be an obstacle to adopting e-Learning
45 My university culture does not support e-Learning
SI 46 I will take my family's views into account regarding using e-Learning
47 People whose opinions I value would think I should use e-Learning in my teaching
48 Whether I decide to adopt e-Learning for teaching is entirely up to me
Demonstration and Evaluation
The aim of the demonstration and evaluation phase of the DS methodology is to demonstrate that the artefact (the DQ methodology) feasibly works to achieve its objectives (stated in Section 3) in at least one context. Additionally, it considers how well the developed DQ method supports a solution to the problem. The utility, quality and efficacy of the method must be rigorously demonstrated via well-executed evaluation methods. In order to answer the third research question (RQ3) (Section 3), a combination of quantitative and qualitative analysis techniques will be used. The main techniques for evaluating this research are case studies and focus groups [68]. Further information about the evaluation and the expected outcomes is given in the following sections.
2.1 Case Studies
To objectively evaluate the research artifact, a case study method has been selected. A case study is an empirical inquiry that investigates a phenomenon within its real-life context [68]. To evaluate the DQ methodology in an operational manner, I plan to apply it to real-world cases. Part of the methodology is expected to be implemented as a prototypical software tool. This tool can be implemented exclusively for the particular case (as will be shown later) or bought off the shelf, following the DQ requirements and the available SOA infrastructure. Data collection from executing the different scenarios will follow the general principles of conducting case studies. More particularly, information will be gathered in the form of personal recordings; open-ended, formal and informal interviews; charts and graphs; as well as environmental observations.
2.1.1 Case study – Dynamic Open Home-Automation (DOHA)
At present, I am working in close cooperation with the University of Granada on a project called Dynamic Open Home-Automation (DOHA) [77]. DOHA is a SOA-based platform for the access, control and management of home-automated systems, composed of a set of lightweight and independent services (devices). Collaboration between services in DOHA is resolved based on Peer-to-Peer (P2P) orchestration principles, in which the interaction control and the execution order of the operations, messages and transactions required in the collaboration are the responsibility of each service. The DOHA composition is shown in Fig. 8, divided into service, virtual and physical layers. Since the goal is to detect semantic data inaccuracies, the level of interest in this case is the service level.
I intend to apply my methodology in the following way: information will be profiled for the given case using the template proposed in Section 5.1. Next, in the definition stage, the profiled data will be used to construct the predicates. The construction process will be supported by an ontology of the environment; at this stage, I plan that the ontology will be built by an administrator using the Protégé [78] software. After the predicates are built and stored in a comprehensive repository, execution follows. The execution process will transform the predicates into envelopes readable by the services. These will then be dispatched through the infrastructure using the JXTA [79] message standard, and the results of the execution will be saved into log files. The process concludes with reading these log files and generating reports to be analysed. To tie everything together, an exclusive module (the “quality predicate manager”, Fig. 8) needs to be implemented and integrated within the DOHA middleware in this case, which will eventually ensure efficient detection of poor semantic data.
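As a rough illustration of the predicate pipeline just described, the sketch below builds quality predicates from profiled entries and serialises them into message envelopes. All names (QualityPredicate, build_predicates, to_envelope) and the JSON envelope format are assumptions made for illustration; the actual module would live inside the DOHA middleware and dispatch the envelopes over JXTA, which is not shown here.

```python
# Hedged sketch of the profiling -> predicate -> envelope steps; the real
# dispatch over JXTA and the DOHA integration are deliberately omitted.
from dataclasses import dataclass, asdict
import json


@dataclass
class QualityPredicate:
    service: str        # service name taken from the profiling template
    attribute: str      # monitored attribute, e.g. a sensor reading
    operator: str       # comparison operator the rule enforces
    value: float        # expected value derived from the domain ontology


def build_predicates(profiled_data: list[dict]) -> list[QualityPredicate]:
    """Definition stage: turn profiled entries into quality predicates."""
    return [QualityPredicate(**entry) for entry in profiled_data]


def to_envelope(predicate: QualityPredicate) -> str:
    """Execution stage: serialise a predicate into a message readable by the services."""
    return json.dumps({"type": "quality-predicate", "body": asdict(predicate)})


# Illustrative run with one hypothetical profiled entry:
profiled = [{"service": "thermostat", "attribute": "office_temperature",
             "operator": "<=", "value": 25.0}]
for p in build_predicates(profiled):
    print(to_envelope(p))   # in the real case this would be dispatched via JXTA
```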
Besides evaluating the artifact, I am also planning to apply the practitioner cycle (see Figure 3) through constant discussions with the partners from the University of Granada, which should positively increase the quality of the artifact.
2.1.2 Case study – Goods Retailer Web Service Composition
Another case in which I might apply this methodology is presented in Appendix 3. The diagram depicts a web-service-oriented configuration. In this case, the cooperation between services is based on centralised orchestration principles (ESB). Also, instead of storing the information about each process in every service, a special module (the BPE) is responsible for managing and executing all the processes. The processes are also stored in a business process repository, and additional information about every process is stored in a business rule repository. The communication protocol in this case is SOAP. Following the DQ methodology, in the preparation stage quality predicates will be composed based on the information stored for the particular process. The predicates will then be converted into SOAP envelopes and dispatched using the BPE. Consequently, the results will be stored in a dedicated log file.
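The conversion of a predicate into a SOAP envelope mentioned above could look roughly as follows. The payload element names and the "qp" namespace are hypothetical assumptions; only the outer SOAP 1.1 envelope structure follows the standard, and dispatch by the BPE is not shown.

```python
# Hedged sketch: wrapping one quality predicate into a SOAP 1.1 envelope.
# The QualityPredicate payload and its namespace are illustrative only.
from xml.sax.saxutils import escape


def predicate_to_soap(service: str, attribute: str, operator: str, value: str) -> str:
    """Build a SOAP 1.1 envelope carrying one quality predicate for the BPE to dispatch."""
    return f"""<soapenv:Envelope
    xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:qp="urn:example:quality-predicate">
  <soapenv:Header/>
  <soapenv:Body>
    <qp:QualityPredicate>
      <qp:Service>{escape(service)}</qp:Service>
      <qp:Attribute>{escape(attribute)}</qp:Attribute>
      <qp:Operator>{escape(operator)}</qp:Operator>
      <qp:Value>{escape(value)}</qp:Value>
    </qp:QualityPredicate>
  </soapenv:Body>
</soapenv:Envelope>"""


# Illustrative call with made-up values for a retailer process:
print(predicate_to_soap("OrderService", "delivery_date", "not_null", ""))
```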
2.2 Focus groups
After a case is deployed, a number of scenarios will be developed. Based on these, operational criteria such as usability, efficiency and effectiveness will be evaluated using focus groups. In order to provide more objective results, the participants in these groups will be the system architects, managers, administrators and developers in charge of setting up and implementing the methodology. A summary table with the criteria and the chosen methods is presented below.
Criterion: Usability
Description: The extent to which the methodology can be used by specified users to achieve specified goals in a specified case (employed by [80]).
Method: Modified positive System Usability Scale (SUS) approach [81]; usability is quantified by collecting simple pass/fail metrics and reporting completion rates.
Examples: Profiling data: did the user/administrator identify the service name in the given scenario? Building quality predicates: did the user/administrator manage to build the quality rule that ensures the office temperature?

Criterion: Efficiency
Description: Time required to deploy the methodology and time required to execute it.
Method: Measuring the total effort needed in time and expenses (deployment); measuring the total execution time, including the time for profiling data, building quality statements, execution by the middleware and environment, and analysing the results.
Examples: Time necessary to implement the module that constructs the quality predicates; time needed for different users to perform the same task/scenario executed by the same composition; time needed for the system to execute the same task on the same scenario, with the services' response times held constant.

Criterion: Effectiveness
Description: The extent to which the methodology can detect inaccurate semantic data, assessed by analysing the content of the reports created at the end of the execution stage.
Method and example: Analysing the report created by the user/administrator, which contains the service name, the process involved in the error, the violated value, etc.
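As a minimal sketch of the pass/fail and SUS measures in the summary above, the snippet below computes a scenario completion rate and scores a ten-item, all-positive SUS questionnaire. The scoring rule assumed here (each 1-5 response contributes response minus 1, scaled to a 0-100 range) follows the common positive-SUS convention and may differ from the modified scale actually used in the study; all data values are made up.

```python
# Hedged sketch of the usability metrics named in the table above.

def completion_rate(outcomes: list[bool]) -> float:
    """Share of participants who completed the scenario (pass/fail metric)."""
    return sum(outcomes) / len(outcomes)


def positive_sus_score(responses: list[int]) -> float:
    """Score one participant's 10-item, all-positive SUS questionnaire on a 0-100 scale."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("Expected ten Likert responses between 1 and 5")
    return sum(r - 1 for r in responses) * 2.5


# Illustrative data for one scenario and one participant:
print(completion_rate([True, True, False, True]))          # 0.75
print(positive_sus_score([4, 4, 5, 3, 4, 4, 5, 4, 3, 4]))  # 75.0
```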
Evaluation factors vs. artifact packages (levels: Syntactic and Semantic (rigorous) and Pragmatic (relevance); requirements grouped into process quality requirements and research output requirements):

Profiling Information: Correctness ▲; Flexibility ▲; Understandability ∆; Simplicity ▲; Implementability ?; Support for semi-structured data ▲; Support for distributive, cooperative, web and P2P types of IS n/a; Detect semantic inaccuracy n/a
Preparation Stage: Correctness ∆; Flexibility ▲; Understandability ∆; Simplicity x; Implementability ?; Support for semi-structured data ?; Support for distributive, cooperative, web and P2P types of IS ▲, ?; Detect semantic inaccuracy n/a
Execution Stage: Correctness ▲; Flexibility ▲; Understandability ∆; Simplicity x; Implementability ?; Support for semi-structured data ?; Support for distributive, cooperative, web and P2P types of IS ▲, ?; Detect semantic inaccuracy ∆
Analysing Stage: Correctness ▲; Flexibility ▲; Understandability ▲; Simplicity ▲; Implementability ?; Support for semi-structured data ?; Support for distributive, cooperative, web and P2P types of IS ▲, ?; Detect semantic inaccuracy ▲, ?
Key: ▲ – fulfils the requirement; ∆ – partly fulfils the requirement; x – does not fulfil the requirement; ? – results in progress; n/a – not applicable