SQRL seminars



Title Analysis of Interrogator-Tag Communication Protocols
Speaker Bojan Nokovic, McMaster University.
Date 8 December 2010
Abstract In this talk we discuss and analyze three different Interrogator-Tag communication protocols. The first protocol is used in AMQM (Automatic Mail Quality Measurement) systems. The second is based on the ISO 18000-7 standard, which specifies the protocol and parameters for active RFID (Radio Frequency Identification) air interface communication in the ISM (Industrial Scientific Medical) band. The third is the AMQM protocol with some features of the ISO 18000-7 standard added. Quantitative properties of the protocols are analyzed from models; the main goals of modelling are to analyze tag message collision probability and power consumption. The models are verified with the PRISM probabilistic model checker.
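As a rough illustration of the kind of quantity such models estimate (a hypothetical toy model, not the speaker's PRISM model or the ISO 18000-7 protocol), the probability that a tag's reply collides in a slotted contention window can be sketched as:

```python
def tag_collision_probability(n_tags, n_slots):
    """Probability that a given tag's reply collides, assuming each of
    n_tags independently picks one of n_slots uniformly at random
    (a slotted-ALOHA-style toy model, not the ISO 18000-7 protocol)."""
    if n_tags < 1 or n_slots < 1:
        raise ValueError("need at least one tag and one slot")
    # The tag collides iff at least one of the other n_tags - 1 tags
    # picks the same slot.
    return 1.0 - (1.0 - 1.0 / n_slots) ** (n_tags - 1)
```

A probabilistic model checker such as PRISM computes properties like this over the full protocol state space, rather than from a closed-form approximation.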
Biography Bojan Nokovic is a Ph.D. student in Software Engineering at McMaster.  He spends one day per week at Lyngsoe Systems (Mississauga), with whom this project is carried out. Bojan holds a BSc Degree in Telecommunication from the University of Sarajevo (Bosnia-Herzegovina), and a Master of Engineering Entrepreneurship and Innovation from McMaster University. He is a licensed P.Eng. in the Province of Ontario.
Slides pdf
Title Developing a Signal Control System using Event-B
Speaker Dr Thai Son Hoang, Senior Researcher & Lecturer in Information Security, ETH Zurich
Date 8 November 2010
Abstract In this presentation, I first introduce Event-B, a formal modelling language for discrete transition systems. Important features of Event-B include the use of set-theoretical concepts for modelling and step-wise refinement during development. Furthermore, I present a guideline on how to develop a control system working together with a possibly fragile environment. Finally, I show an example of developing a signal control system using our approach.
Biography Thai Son Hoang is currently a senior researcher at the Swiss Federal Institute of Technology, Zurich (ETH Zurich), Switzerland. He completed his undergraduate studies at the University of New South Wales (UNSW), Australia, and received his Ph.D. from UNSW in 2006. Since 2005 he has worked at the Department of Computer Science, ETH Zurich, initially as an academic guest, then as a post-doc and lecturer. He was a member of the team developing the RODIN Platform for Event-B at ETH Zurich from 2005 to 2007. His research interests and competence include formal modelling, formal verification and developing tool support.
Slides pdf
Title Identification of Distributed Features in SOA Using Pattern Mining
Speaker Anis Yousefi
Date 31 August 2010
Abstract Feature identification has long been known in the reverse engineering domain as "a technique to identify the source code constructs activated when exercising one of the features of a program". Conventionally, feature identification has been considered in the scope of monolithic systems, where static, dynamic, and textual approaches have been proposed to locate feature-specific code. In this research, we propose a dynamic analysis approach to identify the implementation of distributed features in SOA-based systems. Furthermore, we identify the dynamic behavior of features by analyzing their execution traces. In this approach, feature-specific scenarios are exercised on the instrumented system and the generated execution traces are mined to discover call graphs associated with the exercised feature.
Biography Anis is a Ph.D. candidate in Computing and Software at McMaster.
Slides ppt
Title Helios: A Distributed, Large, High-Resolution Graphical Display
Speaker Jason Costabile, Matthew Dawson, Philip Deljanov-Harrak
Date 17 August 2010
Abstract Visualizing large sets of data using conventional display technologies poses many challenges to the user attempting to understand and manipulate the data.  VIDALab (Visual Design and Analysis Laboratory) attempts to solve these issues by using commodity computer hardware and monitors to form a large and cohesive high-resolution display wall.  Due to the use of multiple computers to drive the large number of monitors, running applications that span across the whole display wall becomes a challenge.  Project Helios aims to create a distributed graphical display server that works across several monitors connected to computers to create a single unified multi-user desktop environment.  Programming applications using the Helios graphical library allows the programmer to use a more conventional single machine programming model, hiding the details of distributing the data over multiple computers.  The developed application can then be used on a large multi-system display solution. Helios also aims to aid in developing applications for simultaneous multi-user desktops and to improve collaboration support between people.
Title Nuclear Safety Analysis Software: Application Requirements and Qualification Needs
Speaker John C. Luxat, Ph.D., P.Eng.
Professor & NSERC/UNENE Industrial Research Chair in Nuclear Safety Analysis
Date 2 July 2010
Abstract Nuclear safety analysis software is used extensively in Canada’s nuclear industry to predict the transient behavior of nuclear power reactors and associated systems and components under plant upset and accident conditions. This analysis is required to verify the adequacy of design; provide information for developing the safe operating envelope (SOE) of nuclear stations; obtain license approvals from the nuclear regulator; and provide information for emergency planning purposes.

Key elements of software qualification are demonstrating confidence that the software is appropriate for the intended applications, which involves verification and validation (V&V) activities, and demonstrating that the software development, maintenance and usage processes are capable of assuring configuration management for the entire software life-cycle.

This seminar will describe some of the typical attributes and applications of nuclear safety analysis software; discuss experience and lessons learned from a major software qualification project conducted in Ontario Hydro/Ontario Power Generation (the V&V Project) which led to the establishment of the Industry Standard Toolset (IST); and discuss the ways in which software engineering research can contribute to major improvements in software qualification of this class of scientific & engineering software.
Biography Dr. John C. Luxat is Professor in the Department of Engineering Physics and holds the NSERC/UNENE Industrial Research Chair in Nuclear Safety Analysis at McMaster University.   He currently has a very active research program at McMaster studying many aspects of the safety of current and future reactors and nuclear fuel cycles.  He is the Principal Investigator for the Ontario funded “Nuclear Ontario” university research network and for the recently announced Canada Foundation for Innovation (CFI) project to establish the Centre for Advanced Nuclear Systems at McMaster University.

Prior to joining McMaster University in 2004 he had 32 years' experience working in many areas of nuclear safety and nuclear engineering in the Canadian nuclear industry, as Vice President, Technical Methods at Nuclear Safety Solutions Limited in Toronto and, prior to that, as Manager of Nuclear Safety Technology at Ontario Power Generation. He consults for numerous Canadian companies on nuclear safety and nuclear engineering issues and has provided advice to government organizations at the national and provincial level. In 2008 he was appointed by the Government of Alberta to the Nuclear Power Expert Panel that prepared a report advising Alberta Energy on nuclear power.

He is a member of the Board of Directors of Atomic Energy of Canada Limited, the Advisory Board of the International Association for Structural Mechanics in Reactor Technology (IASMiRT), and is a member of the Canadian Nuclear Society and the American Nuclear Society.   He served as the 2005/06 President of the Canadian Nuclear Society (CNS). In 2009 he was elected to the Executive Committee of the Thermalhydraulics Division of the American Nuclear Society and is the Student Advisor for the American Nuclear Society Student Chapter at McMaster University, the first such chapter in Ontario.

In 2004 he was awarded the Canadian Nuclear Society/Canadian Nuclear Association Outstanding Contribution Award for his significant contributions to safety analysis and licensing of CANDU reactors.

He obtained his B.Sc. (Eng.) and M.Sc. (Eng.) degrees in Electrical Engineering from the University of Cape Town, South Africa in 1967 and 1969, respectively. In 1972 he obtained his Ph.D. degree in Electrical Engineering from the University of Windsor, Ontario.
Slides pdf
Title Adding Dynamic Scheduling to Safety-Critical Systems
Speaker Luis Almeida, University of Porto, Portugal
Date 4 June 2010
Abstract The design of safety-critical systems has typically adopted static techniques to simplify error detection and fault tolerance. However, economic pressure to reduce costs is exposing the limitations of those techniques in terms of efficiency in the use of system resources. In some industrial domains, such as the automotive domain, this pressure is too high, and other approaches to safety must be found, e.g., approaches capable of providing some kind of fault tolerance with graceful degradation at lower cost, or capable of adapting to instantaneous requirements to better use the computational/communication resources.

This paper analyses the development of systems that exhibit such a level of flexibility, allowing the system configuration to evolve within a well-defined space. Two options are possible: one starts from the typical static approach but introduces choice points that are evaluated only at runtime; the other starts from an open-systems approach but delimits the space of possible adaptations. The paper follows the latter and presents a specific contribution, namely the concept of a local utilization bound, which supports a fast and efficient schedulability analysis for on-line resource management that assures continued safe operation. Such a local bound is derived off-line for the specific set of possible configurations, and can be significantly higher than any generic, non-necessary utilization bound, such as the well-known Liu and Layland bound for Rate-Monotonic scheduling.
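For readers unfamiliar with the bound mentioned above: the classic Liu and Layland result gives a sufficient (but not necessary) schedulability test for Rate-Monotonic scheduling. A minimal sketch (an illustration of the generic bound only, not the local-bound analysis of the talk):

```python
def liu_layland_bound(n):
    """Liu & Layland utilization bound for n independent periodic tasks
    under Rate-Monotonic scheduling: n * (2**(1/n) - 1)."""
    return n * (2.0 ** (1.0 / n) - 1.0)

def rm_schedulable(utilizations):
    """Sufficient (not necessary) test: a task set is RM-schedulable if
    its total utilization does not exceed the Liu-Layland bound."""
    return sum(utilizations) <= liu_layland_bound(len(utilizations))
```

The bound falls toward ln 2 ≈ 0.693 as n grows; a local bound derived off-line for a known, delimited set of configurations can safely admit higher utilizations than this generic test.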
Biography Luis Almeida is currently an associate professor at the Electrical and Computer Engineering Department of the University of Porto, a member of the Telecommunications Institute in Porto, and a member of the Electronics and Telematics Engineering Institute of Aveiro, in which he coordinated the Electronic Systems Lab between 2003 and early 2008. He is also a member of the IEEE Computer Society, a member of the Strategic Management Board of the EU/ICT NoE ArtistDesign and leader of the Real-Time Networks activity in that NoE, a member of the IEEE TCs on Real-Time Systems and on Factory Automation (SC on Real-Time and Fault-Tolerant Systems) and of the IFIP TC10.2 (WG on Embedded Systems), and a Trustee of the RoboCup Federation.

His current interests are in real-time communication protocols for embedded systems with an emphasis on mechanisms to support predictable operational flexibility as needed for dynamic QoS management, graceful degradation and open distributed real-time systems in general. He is also interested in control architectures for teams of autonomous mobile robots, focusing on distributed architectures to support global coordination and data fusion, and in flexible control approaches, particularly for networked control. He regularly contributes to the programs and to the organization of scientific events in the Real-Time Systems, Factory Automation and Robotics communities, including RTSS, ECRTS, DATE, SIES, WFCS, ETFA and RoboCup.
Title How Can We Do Better? Tuning Knowledge-Building Interaction for Creativity, Innovation and Quality of Life
Speaker dr mc schraefel, phd, University of Southampton, UK
Date 10 May 2010
Abstract While most of us might want information seeking interactions as if we were on board the Starship Enterprise, asking the computer to help gather and analyze data so we could work out what-if scenarios through wonderful 3D holographics, most of our current information seeking actions are more reminiscent of late 19th Century book requests to the British Library. That is, fill out a form for a resource; have that resource or set of resources delivered back to us. Each time we make a request, we start a new form, a new search. Little or no context is maintained. No information is kept. The only added feature to the process is if the clerk processing the requests gets to know the visitor, books s/he thinks the visitor might find more interesting based on past requests get put to the top of the stack. So while we're not in the 23rd Century, how might we advance our knowledge building interactions beyond the 19th? Some of us think part of the solution is in the preponderance of newly available linked data - including personal, social and public information, and the metadata around that information.

I'd also like to suggest that there is value in considering not only how to make information more accessible, useful and usable to the mind that will build knowledge, but also how we might incorporate the knowledge that our brains are embodied into our design process. Most knowledge development systems tend either to ignore or to work around our physicality like it's an inconvenience to our work. Recent neural research however suggests that by acknowledging the body, working with the sensory-motor system, we can improve our knowledge building endeavors.

In this presentation i'd like to explore some approaches we've been considering towards different kinds of information interaction and information awareness and how these approaches may contribute to better knowledge building, and potentially, enhanced creativity, innovation and quality of life.
Biography My current post is Reader in the Intelligence Agents Multimedia (IAM) Group, University of Southampton, UK, where I lead the Information Interaction & Well Being Computing Theme. I'm also a fellow of the British Computer Society and a Senior Research Fellow of the Royal Academy of Engineering.
My work in information interaction focuses on information exploration to support leveraging what a person knows in order to build knowledge about what they'd like to know. More recently our group's work is also investigating new framings of information integration and interaction to support creativity, innovation and wellbeing. Beyond academic degrees, as part of my interest in considering how to design for "embodied brains" (rather than just brains), i've earned a number of qualifications in strength, conditioning, movement and neural re-education training. More detail is available at my uni website: http://www.ecs.soton.ac.uk/~mc
Title Towards Search Carrying Code
Speaker Ali Taleghani, University of Waterloo
Date 16 February 2010
Abstract In this talk I will introduce search-carrying code, by which a code consumer certifies an acquired program via software model checking. To speed up the certification task, the code producer provides a search script that encodes the search of the program's reachability graph. The code consumer uses the search script to direct its search of the reachability graph and reduce search time by avoiding hash-table lookups. Sequential search-carrying code achieves search time savings of up to 20%, but more substantial time savings are possible by extending the ideas to parallel model checking. In this case, the code producer supplies a collection of search scripts, each of which covers a region of the program's state space. Using a combination of search-carrying code and parallel model checking, we show how to reduce certification times by a factor of up to 1.3n using n parallel processors.
Biography Ali Taleghani is a Ph.D. candidate at the University of Waterloo.
Title Bits of Evidence: What We Actually Know About Software Development, and Why We Believe It's True
Speaker Greg Wilson, University of Toronto
Date 10 February 2010
Abstract By the time the Seven Years War ended in 1763, Britain had lost 1512 sailors in action, but almost 100,000 to scurvy, despite the fact that the Scottish surgeon James Lind had shown twenty years earlier that a little lemon juice every day was enough to prevent or cure the dreaded ailment. It was more than a century before medical practitioners began paying attention to controlled trials of this kind: as recently as the 1950s, many doctors rejected statistical results linking smoking to cancer, saying that what happened "on average" was of no help when they were faced with a specific patient. Today, though, most practitioners accept that decisions about the care of individual patients should be based on conscientious, explicit, and judicious use of current best evidence.

The idea that claims about software development practices should be based on evidence is still foreign to software developers, who often talk as if a beer and an anecdote constituted proof. This is finally starting to change: any academic who claims that a particular tool or practice makes software development faster, cheaper, or more reliable is now expected to back up that claim with some sort of empirical study. Such studies are difficult to do well, but hundreds have now been published covering almost every aspect of software development. This talk will look at some of the best of those studies, which are as elegant as classic experiments in physics, psychology, and other scientific disciplines.
Biography Greg Wilson holds a Ph.D. in Computer Science from the University of Edinburgh, and has worked on high-performance scientific computing, data visualization, and computer security. He is now an Assistant Professor in Computer Science at the University of Toronto, where his primary research interest is lightweight software engineering tools. Greg is on the editorial board of "Computing in Science and Engineering"; his most recent books are "Data Crunching" (Pragmatic, 2005), "Beautiful Code" (O'Reilly, 2007), and "Practical Programming" (Pragmatic, 2009).
Title Assurance Based Development of Critical Systems
Speaker John C. Knight, Department of Computer Science, University of Virginia
Date 30 July 2009
Abstract The popularity of safety and other assurance arguments as a principal strategy in the certification of safety-critical systems has given rise to an urgent need for engineering processes that facilitate the synergistic development of a system and its safety case. In this presentation, I will describe Assurance-Based Development (ABD), a concept in which synthesis produces a detailed process that is tailored to a particular application and that simultaneously generates both a system and its assurance argument.  I will introduce the concept of a success argument, an evolving argument that the engineering effort under way will lead to an acceptable system in an acceptable time and with acceptable cost.  I will describe the ABD decision mechanism underlying process synthesis, in which the evolving product assurance and success arguments guide the formulation of the evolving concrete development process.  In ABD, completing the incomplete portions of the product assurance and success arguments reveals the obligations that the detailed process has to meet. The detailed process, in turn, returns the evidence needed to complete the incomplete portions of the arguments. I will illustrate the ideas with examples taken from a case study of a medical device.
Biography John Knight is a professor of computer science at the University of Virginia. He holds a B.Sc. (Hons) in Mathematics from the Imperial College of Science and Technology (London) and a Ph.D. in Computer Science from the University of Newcastle upon Tyne. Prior to joining the University of Virginia in 1981, he was with NASA's Langley Research Center.

Dr. Knight's research interests are in software dependability and security. He is currently working on projects in safety-critical embedded systems and the security of critical networked applications. Specific research topics include techniques for practical formal verification, the use of safety and assurance arguments to guide software development, secretless security, and Helix, a self-regenerative architecture for the incorruptible enterprise.

From 2001 to 2005 Dr. Knight served as Editor in Chief of the IEEE Transactions on Software Engineering, and he is a member of the editorial board of the Empirical Software Engineering Journal.  He was the General Chair of the 2000 International Symposium on the Foundations of Software Engineering, and he was the General Chair of the 2007 International Conference on Software Engineering.  In 2006, he was the recipient of the IEEE Computer Society's Harlan D. Mills Award, and in 2008 he was the recipient of the Distinguished Service Award of the ACM's Special Interest Group on Software Engineering.
Last Updated on Tuesday, 10 May 2011 15:37