WP4 Models and Contents Quality Assessment
Stefania Gnesi, CNR
AGENDA
- WP4: Motivation and objectives
- Timing and deliverables
- Activity in tasks
  - Task 4.1: Formal Verification of Business Processes
  - Task 4.2: Linguistic Quality Evaluation
  - Task 4.3: Feedback-based Quality Evaluation
- Conclusions
WP4: Models and Contents Quality Assessment
WP4 investigates the quality assessment of BP model specifications and of the related learning contents. The quality assessment will be based on both formal verification and natural language processing techniques.
WP4: Objectives
The collaborative approach to the definition of models and contents makes automatic tools for the quality assessment of BPs highly desirable, so as to direct the activity of the community towards better learning material. Natural language processing techniques will be applied to the quality assessment of collaboratively provided contents and to the investigation of the interaction patterns for BP modeling typically used within public administrations.
Three tasks:
- Task 4.1: Formal Verification of Business Processes
- Task 4.2: Linguistic Quality Evaluation
- Task 4.3: Feedback-based Quality Evaluation
WP 4 Gantt
Deliverables
D4.1: Quality Assessment Strategies for BP Models (UNICAM) (M18)
This deliverable reports the results of the research activities on formal verification strategies and specifically describes the verification strategies that will be included in the Learn PAd platform.
D4.2: Quality Assessment Strategies for Contents (CNR) (M21)
This deliverable reports the results of the research activities on linguistic quality assessment and specifically describes the strategies that will be included in the Learn PAd platform.
D4.3: Quality Assessment Mechanisms Implementation (CNR) (M27)
This deliverable consists of the final release of the quality assessment software mechanisms included in the Learn PAd platform.
Task 4.1: Formal Verification of Business Processes
The business process model must be analysed and improved to make sure that (i) it actually includes all desired instances and (ii) it does not exhibit any undesired property:
- Definition of modeling guidelines for process experts. Basic guidelines should address the way the BP model is graphically designed and the way text in the BP model is edited; such guidelines ease the automatic processing task.
- Standard verification activities for business process models will be implemented by mapping the BP onto Petri-net-based notations.
- Verification activities will also consider the data structures specified within BP models. In a PA, this information typically relates to documents whose status changes according to the different activities performed by an office.
Verification of Process Models
Relevant properties (soundness):
- Option to complete: a process instance, once started, can always complete.
- No dead activities: the process model contains no dead activity, i.e., for each activity there exists at least one complete trace producible on the model that contains this activity.
- Proper completion: when a process instance completes, no activity of that instance is still running or enabled.
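On small, 1-safe Petri nets the three soundness properties above can be checked by exhaustively exploring the reachable markings. The sketch below is illustrative only; the net encoding (transitions as pre/post place sets, markings as frozensets) is an assumption for this example, not the Learn PAd implementation.

```python
from collections import deque

def reachable(transitions, initial):
    """BFS over the state space of a 1-safe net.

    transitions: {name: (pre_places, post_places)} with frozenset values.
    Returns (set of reachable markings, set of transitions that ever fire).
    """
    seen = {initial}
    fired = set()
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        for name, (pre, post) in transitions.items():
            if pre <= m:                  # transition enabled in marking m
                fired.add(name)
                m2 = (m - pre) | post     # fire the transition
                if m2 not in seen:
                    seen.add(m2)
                    queue.append(m2)
    return seen, fired

def check_soundness(transitions, initial, final):
    """Check option to complete, no dead activities, proper completion."""
    markings, fired = reachable(transitions, initial)
    dead = set(transitions) - fired                            # no dead activities
    stuck = [m for m in markings                               # option to complete:
             if final not in reachable(transitions, m)[0]]     # final reachable from every marking
    improper = [m for m in markings if final < m]              # proper completion:
    return not dead and not stuck and not improper             # no marking strictly covers final
```

For instance, a two-step net `start -[A]-> p1 -[B]-> end` passes all three checks, while adding a transition whose input place is never marked makes it fail "no dead activities".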
Verification of Process Models: Into Practice (pipeline diagram)
Models from the NoMagic and BOC modelling environments are published as XWiki pages as soon as they satisfy the requested properties. BP verification proceeds in three steps: mapping BPMN models to Petri nets, model unfolding, and properties assessment.
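The first pipeline step, mapping BPMN to Petri nets, can be sketched for the simplest case with the classic translation in which each task becomes a transition and each sequence flow becomes a place. This is a deliberately simplified illustration (gateways, events, and data objects are omitted), not the mapping used in the project.

```python
def bpmn_to_petri(tasks, flows, start, end):
    """Simplified BPMN-to-Petri-net mapping (tasks only, no gateways).

    tasks: list of task names.
    flows: list of (source, target) sequence flows, where source/target
           are task names or the start/end event names.
    Returns (transitions, initial_marking, final_marking) in the same
    encoding used for verification: each flow "s->d" is a place.
    """
    transitions = {}
    for t in tasks:
        pre = frozenset(f"{s}->{d}" for s, d in flows if d == t)   # incoming flows
        post = frozenset(f"{s}->{d}" for s, d in flows if s == t)  # outgoing flows
        transitions[t] = (pre, post)
    initial = frozenset(f"{s}->{d}" for s, d in flows if s == start)
    final = frozenset(f"{s}->{d}" for s, d in flows if d == end)
    return transitions, initial, final
```

A sequential process `start -> Receive -> Review -> end` thus yields two transitions connected through the place `Receive->Review`.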
Task 4.2: Linguistic Quality Evaluation
This task aims at defining and implementing automated procedures that:
- verify that the textual content describing the tasks of a business process (i.e., the XWiki documents) provides information consistent with the business process model itself;
- automatically identify ambiguous sentences and vague terms in natural language descriptions, and estimate quantitative indexes of the linguistic quality of the contents.
NLP techniques (diagram): rule-based and machine learning approaches, addressing domain jargon, ambiguity, and readability.
Task 4.2: Linguistic Quality Evaluation
Categories of lexical and syntactical ambiguities:
- Optionality: unclear optional choices (and/or, if necessary, optionally)
- Subjectivity: usage of terms that may involve the judgment of the reader (similar, take into account, better, as possible)
- Vagueness: usage of generic terms (acceptable, accurate, relevant, effective)
- Weakness: usage of verbs that indicate possibility (can, could, might, would)
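Detection of these ambiguity classes is typically keyword-driven, in the spirit of QuARS. The sketch below uses only the example indicators listed on the slide as its dictionaries, not the actual QuARS lexicons.

```python
import re

# Example dictionaries taken from the slide; the real QuARS lexicons are larger.
AMBIGUITY_CLASSES = {
    "optionality": ["and/or", "if necessary", "optionally"],
    "subjectivity": ["similar", "take into account", "better", "as possible"],
    "vagueness": ["acceptable", "accurate", "relevant", "effective"],
    "weakness": ["can", "could", "might", "would"],
}

def detect_ambiguities(sentence):
    """Return (class, keyword) pairs for every indicator found in the sentence."""
    hits = []
    lowered = sentence.lower()
    for cls, keywords in AMBIGUITY_CLASSES.items():
        for kw in keywords:
            # word-boundary match, so e.g. "can" does not fire inside "scan"
            if re.search(r"\b" + re.escape(kw) + r"\b", lowered):
                hits.append((cls, kw))
    return hits
```

For example, "The office might accept any relevant document" is flagged both for weakness ("might") and for vagueness ("relevant").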
Task 4.2: Linguistic Quality Evaluation
Experiments with the QuARS tool for ambiguity detection in requirements have been performed on a set of PA documents: novel classes of linguistic problems have to be defined for PA documents, beyond the classes detected by QuARS. A set of PA documents has been retrieved from the Web, and interviews have been conducted with civil servants to highlight linguistic problems typical of PA documents.
Task 4.2: Questionnaire Results (Partial)
Task 4.3: Feedback-based Quality Evaluation
The plan is to define a collaborative assessment of linguistic content based on a machine learning approach:
- employ user feedback to define guidelines for writing XWiki documents that are high-quality from the user's point of view;
- define a human-annotated data set to train machine learning algorithms that automatically identify poorly written (e.g., incomplete, difficult to understand, poorly structured) XWiki documents.
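A classifier trained on such a human-annotated data set could be as simple as bag-of-words Naive Bayes. The sketch below is a toy illustration of the idea; the training examples and labels are invented, and any real model would be trained on the annotated XWiki corpus the task describes.

```python
import math
from collections import Counter

def train(labelled_docs):
    """labelled_docs: list of (text, label) pairs. Returns a model dict."""
    counts = {}          # label -> Counter of word occurrences
    priors = Counter()   # label -> number of training documents
    for text, label in labelled_docs:
        priors[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    vocab = {w for c in counts.values() for w in c}
    return {"counts": counts, "priors": priors, "vocab": vocab}

def classify(model, text):
    """Return the most likely label using add-one (Laplace) smoothing."""
    total_docs = sum(model["priors"].values())
    best_label, best_score = None, float("-inf")
    for label, prior in model["priors"].items():
        c = model["counts"][label]
        denom = sum(c.values()) + len(model["vocab"])
        score = math.log(prior / total_docs)
        for w in text.lower().split():
            score += math.log((c[w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

Trained on a handful of documents labelled "good" or "poor", the model assigns new XWiki pages to the class whose vocabulary they most resemble.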
Thank you for your attention! Questions?
Software Tool Chain in Detail