Case study. The Ada chatbot: personalised, AI-driven assistant for each student.

As part of the AI and Vocational Education and Training project, funded through the EU Erasmus+ programme, we are producing a series of case studies of the use of AI in VET in five European countries. Here is my first case study – the Ada chatbot developed at Bolton College.

About Bolton College

Bolton College is one of the leading vocational education and training providers in the North West of England, specialising in delivering training – locally, regionally and nationally – to school leavers, adults and employers. The college employs over 550 staff members who teach over 14,500 full- and part-time students across a range of centres around Bolton. The college’s Learning Technology Team has a proven reputation for using learning analytics, machine learning and adaptive learning to support students as they progress through their studies.

The Ada Chatbot

The Learning Technology Team has developed a digital assistant called Ada which went live in April 2017. Ada, which uses the IBM Watson AI engine, can respond to a wide range of student inquiries across multiple domains. The college’s Learning Technology Lead, Aftab Hussain, says “It transforms the way students get information and insights that support them with their studies.” He explains: “It can be hard to find information on the campus. We have an information overload. We have lots of data but it is hard to manage. We don’t have the tools to manage it – this includes teachers, managers and students.” Ada was first developed to overcome the complexity of accessing information and data.

Student questions

Ada is able to respond to student questions including:

  1. General inquiries from students about the college (for example: semester dates, library opening hours, exam office locations, campus activities, the deadline for applying to university and more);
  2. Specific questions from students about their studies (for example: What lessons do I have today/this afternoon/tomorrow? Who are my teachers? What’s my attendance like? When is my next exam? When and where is my work placement? What qualifications do I have? What courses am I enrolled in?);
  3. Subject-specific inquiries from students. Bolton College is teaching Ada to respond to questions relating to GCSE Maths, GCSE English and the employability curriculum.

Personalised and contextualised learning

Aftab Hussain explains: “We are connecting all campus data sets. Ada can reply to questions contextually. She recognises who you are and is personalised according to who you are and where you are in the student life cycle. The home page uses Natural Language Processing and the Watson AI engine. It can reply to 25,000 questions around issues such as mental health or library opening times. It also handles subject-specific enquiries, including English, Mathematics, and business and employability. All teachers have been invited to submit the top 20 queries they receive. Machine learning can recognise the questions. The technical process is easy.” However, he acknowledges that inputting data into the system can be time-consuming, and the team is looking at ways of automatically reading course documentation and presentations.
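
To make the idea concrete, here is a minimal sketch of the kind of intent matching that sits behind an assistant like Ada. It is illustrative only: Bolton College’s system is built on the IBM Watson engine, and the intents, training phrases and answers below are invented.

```python
# Illustrative sketch of chatbot intent matching; Ada itself is built on
# IBM Watson, so everything here (intents, phrases, answers) is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Each intent is trained on example phrasings, such as the "top 20
# queries" submitted by teachers, and mapped to a canned answer.
INTENTS = {
    "library_hours": ["when is the library open", "library opening times"],
    "next_exam": ["when is my next exam", "exam timetable"],
    "timetable": ["what lessons do i have today", "my classes tomorrow"],
}
ANSWERS = {
    "library_hours": "The library is open 8.30am to 8pm on weekdays.",
    "next_exam": "Your next exam date is shown on your personal record.",
    "timetable": "Here is your timetable for the day you asked about.",
}

# Flatten the training phrases and remember which intent each belongs to.
phrases, labels = [], []
for intent, examples in INTENTS.items():
    phrases.extend(examples)
    labels.extend([intent] * len(examples))

vectoriser = TfidfVectorizer().fit(phrases)
phrase_vectors = vectoriser.transform(phrases)

def answer(question: str, threshold: float = 0.3) -> str:
    """Return the answer for the closest-matching intent, if close enough."""
    sims = cosine_similarity(vectoriser.transform([question]), phrase_vectors)[0]
    best = sims.argmax()
    if sims[best] < threshold:
        return "Sorry, I don't know that one yet."
    return ANSWERS[labels[best]]

print(answer("What time does the library open?"))  # -> library hours answer
```

A production assistant would also resolve who is asking (to personalise the reply) and fall back to a human when confidence is low; the threshold above is a stand-in for that kind of safeguard.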

All the technical development has been undertaken in house. As well as being accessible through the web, Ada has both iOS and Android apps and can also be queried through smart speakers.

The system also links to the college Moodle installation and can provide access to assignments, college information services and curriculum materials. The system is increasingly being used in online tutorials, providing both questions for participants and access to learning materials, for instance videos for health and social care.

It is personalised for individuals and contextualised according to what they are doing or want to find out. Aftab says: “We are looking at the transactional distance – the system provides immediate feedback, reducing the transactional distance.”

Digital assessment

Work is also being undertaken in developing the use of the bot for assessment. This is initially being used for the evaluation of work experience, where students need to provide short examples of how they are meeting objectives – for example in collaboration or problem solving. Answers can be uploaded, evaluated by the AI and feedback returned instantly.
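
As a hedged illustration of the principle (the article does not describe the college’s actual model), instant feedback on a short answer can be as simple as checking whether the answer evidences the target objective. The objectives and keyword lists below are invented; a real system would use a trained model rather than keyword matching.

```python
# Toy sketch of instant feedback on a short work-experience reflection.
# The objectives and their keyword lists are invented for illustration.
OBJECTIVE_KEYWORDS = {
    "collaboration": {"team", "together", "colleague", "shared", "helped"},
    "problem solving": {"problem", "solution", "solved", "fixed", "resolved"},
}

def feedback(answer: str, objective: str) -> str:
    """Return immediate feedback on how well an answer evidences an objective."""
    words = set(answer.lower().split())
    hits = words & OBJECTIVE_KEYWORDS[objective]
    if len(hits) >= 2:
        return f"Good evidence of {objective} (you mention: {', '.join(sorted(hits))})."
    if hits:
        return f"Some evidence of {objective}; try to give a fuller example."
    return f"This doesn't yet show {objective}; describe a concrete situation."

print(feedback("I worked with my team and together we solved a rota problem",
               "collaboration"))
```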

Nudging

Since March 2019, the Ada service has provided nudges to students with timely and contextualised information, advice and guidance (IAG) to support their studies. The service nudges students about forthcoming exams, their work placement feedback and more. For example, a student might receive feedback regarding his work placement from his career coach and employer.

The College is currently implementing ProMonitor, a service which will offer teachers and tutors a scalable solution for managing and supporting the progress made by their students. Once ProMonitor is in place, Ada will be in a position to nudge students about forthcoming assignments and the grades awarded for those assignments. She will also offer students advice and guidance about staying on track with their studies. Likewise, Ada will nudge teachers and student support teams to inform them about student progress, allowing timely support to be put in place for students across the College.
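
A sketch of what such a nudge rule might look like. The record layout and thresholds are assumptions made for illustration; ProMonitor’s actual data model is not described here.

```python
# Hypothetical nudge rules; field names and thresholds are assumptions.
from datetime import date, timedelta

def nudges_for(student: dict) -> list[str]:
    """Generate timely, contextualised nudges for one student record."""
    msgs = []
    days_left = (student["next_assignment_due"] - date.today()).days
    if 0 <= days_left <= 3:
        msgs.append(f"Your next assignment is due in {days_left} day(s).")
    if student["attendance_pct"] < 80:  # illustrative threshold
        msgs.append("Your attendance has dipped; your tutor has been alerted.")
    return msgs

student = {"name": "Sam",
           "next_assignment_due": date.today() + timedelta(days=2),
           "attendance_pct": 71}
for msg in nudges_for(student):
    print(f"To {student['name']}: {msg}")
```

The same rules can be pointed at teachers and support teams rather than students, which is essentially what the paragraph above describes.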

A personal lifelong learning companion

For Aftab Hussain the persona of the digital agent is important. “In the future”, he says, “every child will have a personal lifelong learning companion which will support teaching and learning.” He thinks these will probably come from the big platform suppliers. Children will have their digital assistant from the age of 3 or 4 and will have a Personal Learner Number allowing data to be exchanged between different institutions.


Recognising competence and learning

As promised, some further thoughts on the DISCUSS conference, held earlier this week in Munich.

One of the themes for discussion was the recognition of (prior) learning. The theme had emerged after looking at the main work of European projects, particularly in the field of lifelong learning. The idea and attraction of recognising learning from different contexts, and particularly from informal learning, is hardly new. In the 1990s, in the UK, the National Council for Vocational Qualifications (as it was then called) devoted resources to developing systems for the Accreditation of Prior Learning. One of the ideas behind National Vocational Qualifications was the decoupling of teaching and learning from learning outcomes, expressed in terms of competences and performance criteria. Therefore, it was thought, anyone should be able to have their competences recognised (through certification) regardless of whether or not they had followed a particular formal training programme. Despite the considerable investment, it was at best a limited success. Developing observably robust processes for accrediting such learning was problematic, as was the time and cost of implementing such processes.

It is interesting to consider why there is once more an upsurge of interest in the recognition of prior learning. My feeling is that in the UK the initiative was driven by the weak links between vocational education and training and the labour market. In countries like Germany, with a strong apprenticeship training system, there was seen to be no need for such a procedure. Furthermore, learning was linked to the work process, and competence was seen as the internalised ability to perform in an occupation, rather than as an externalised series of criteria for qualification. However, the recent waves of migration, initially from Eastern Europe and now of refugees, have resulted in large numbers of people who may be well qualified (in all senses of the word) but with no easily recognisable qualification for employment.

I am unconvinced that attempts to formally assess prior competence as a basis for fast-tracking the award of qualifications will work. I think we probably need to look much deeper at both ideas around effective practice and at what exactly we mean by recognition, and I will write more about this in future posts. But digging around in my computer today I came across a paper I wrote together with Jenny Hughes around some of these issues. I am not sure the title helped attract a wide readership: The role and importance of informal competences in the process of acquisition and transfer of work skills. Validation of competencies – a review of reference models in the light of youth research: United Kingdom. Below is an extract.

“NVQs and the accreditation of informal learning

As Bjørnåvold (2000) says, the system of NVQs is, in principle, open to any learning path and learning form and places a particular emphasis on experience-based learning at work. At least in theory, it does not matter how or where you have learned; what matters is what you have learned. The system is open to learning taking place outside formal education and training institutions, or to what Bjørnåvold terms non-formal learning. This learning has to be identified and judged, so it is no coincidence that questions of assessment and recognition have become crucial in the debate on the current status of the NVQ system and its future prospects.

While the NVQ system as such dates back to 1989, the actual introduction of “new” assessment methodologies can be dated to 1991. This was the year the National Council for Vocational Qualifications (NCVQ) and its Scottish equivalent, Scotvec, required that “accreditation of prior learning” should be available for all qualifications accredited by these bodies (NVQs and general national qualifications, GNVQs). The introduction of a specialised assessment approach to supplement the ordinary assessment and testing procedures used when following traditional and formal pathways was motivated by the following factors:

1. to give formal recognition to the knowledge and skills which people already possess, as a route to new employment;
2. to increase the number of people with formal qualifications;
3. to reduce training time by avoiding repetition of what candidates already know.

The actual procedure applied can be divided into the following steps. The first step consists of providing general information about the APL process, normally by advisers who are not subject specialists, often supported by printed material or videos. The second and most crucial step includes the gathering and preparation of a portfolio. No fixed format for the portfolio has been established, but all evidence must be related to the requirements of the target qualification. The portfolio should include statements of job tasks and responsibilities from past or present employers as well as examples (proofs) of relevant “products”. Results of tests or specifically-undertaken projects should also be included. Thirdly, the actual assessment of the candidate takes place. As it is stated: “The assessment process is substantially the same as that which is used for any candidate for an NVQ. The APL differs from the normal assessment process in that the candidate is providing evidence largely of past activity rather than of skills acquired during the current training course.” The result of the assessment can lead to full recognition, although only a minority of candidates have sufficient prior experience to achieve this. In most cases, the portfolio assessment leads to exemption from parts of a programme or course.

The attention towards specialised APL methodologies has diminished somewhat in the UK during recent years. It is argued that there is a danger of isolating APL and that, rather, it should be integrated into normal assessments as one of several sources of evidence: “The view that APL is different and separate has resulted in evidence of prior learning and achievement being used less widely than anticipated. Assessors have taken steps to avoid this source of evidence or at least become over-anxious about its inclusion in the overall evidence a candidate may have to offer.” We can thus observe a situation where responsible bodies have tried to strike a balance between evidence of prior and current learning as well as between informal and formal learning. This has not been a straightforward task, as several findings suggest that APL is perceived as a “short cut”, less rigorously applied than traditional assessment approaches. The actual use of this kind of evidence, either through explicit APL procedures or in other, more integrated ways, is difficult to overview. Awarding bodies are not required to list alternative learning routes, including APL, on the certificate of a candidate. This makes it almost impossible to identify where prior or informal learning has been used as evidence.

As mentioned in the discussions of the Mediterranean and Nordic experiences, the question of assessment methodologies cannot be separated from the question of qualification standards. Whatever evidence is gathered, some sort of reference point must be established. This has become the most challenging part of the NVQ exercise in general and the assessment exercise in particular. We will approach this question indirectly by addressing some of the underlying assumptions of the NVQ system and its translation into practical measures. Currently the system relies heavily on the following basic assumptions: legitimacy is to be assured through the assumed match between the national vocational standards and competences gained at work; the involvement of industry in defining and setting up standards has been a crucial part of this struggle for acceptance. Validity is supposed to be assured through the linking and location of both training and assessment to the workplace. The intention is to strengthen the authenticity of both processes, avoiding simulated training and assessment situations where validity is threatened. Reliability is assured through detailed specifications of each single qualification (and module). Together with extensive training of the assessors, this is supposed to secure the consistency of assessments and eventually lead to an acceptable level of reliability.

A number of observers have argued that these assumptions are difficult to defend. When it comes to legitimacy, it is true that employers are represented in the above-mentioned leading bodies and standards councils, but several weaknesses of both a practical and fundamental character have appeared. Firstly, there are limits to what a relatively small group of employer representatives can contribute, often on the basis of scarce resources and limited time. Secondly, the more powerful and more technically knowledgeable organisations usually represent large companies with good training records and wield the greatest influence. Smaller, less influential organisations obtain less relevant results. Thirdly, disagreements in committees, irrespective of who is represented, are more easily resolved by inclusion than exclusion, inflating the scope of the qualifications. Generally speaking, there is a conflict of interest built into the national standards between the commitment to describe competences valid on a universal level and the commitment to create as specific and precise standards as possible. As to the questions of validity and reliability, our discussion touches upon drawing up the boundaries of the domain to be assessed and tested. High quality assessments depend on the existence of clear competence domains; validity and reliability depend on clear-cut definitions, domain-boundaries, domain-content and ways whereby this content can be expressed.

As in the Finnish case, the UK approach immediately faced a problem in this area. While early efforts concentrated on narrow task-analysis, a gradual shift towards broader function-analysis has taken place. This shift reflects the need to create national standards describing transferable competences. Observers have noted that the introduction of functions was paralleled by detailed descriptions of every element in each function, prescribing performance criteria and the range of conditions for successful performance. The length and complexity of NVQs, currently a much-criticised factor, stems from this “dynamic”. As Wolf says, we seem to have entered a “never ending spiral of specifications”. Researchers at the University of Sussex have concluded, on the challenges facing NVQ-based assessment, that pursuing perfect reliability leads to meaningless assessment, while pursuing perfect validity leads towards assessments which cover everything relevant but take too much time, leaving too little time for learning. This statement reflects the challenges faced by all countries introducing output- or performance-based systems relying heavily on assessments.

“Measurement of competences” is first and foremost a question of establishing reference points and less a question of instruments and tools. This is clearly illustrated by the NVQ system, where questions of standards clearly stand out as more important than the specific tools developed during the past decade. And as stated, specific approaches like “accreditation of prior learning” (APL) and “accreditation of prior experiential learning” (APEL) have become less visible as the NVQ system has settled. This is an understandable and fully reasonable development, since all assessment approaches in the NVQ system in principle have to face the challenge of experientially-based learning, i.e., learning outside the formal school context. The experiences from APL and APEL are thus being integrated into the NVQ system, albeit to an extent that is difficult to judge. In a way, this is an example of the maturing of the system. The UK system, being one of the first to try to construct a performance-based system linking various formal and non-formal learning paths, illustrates the dilemmas of assessing and recognising non-formal learning better than most other systems because there has been time to observe and study systematically the problems and possibilities. The future challenge facing the UK system can be summarised as follows: who should take part in the definition of standards, how should competence domains be described and how should boundaries be set? When these questions are answered, high quality assessments can materialise.”

Open Learning Analytics or Architectures for Open Curricula?

George Siemens’s latest post, based on his talk at TEDxEdmonton, makes for interesting reading.

George says:

Classrooms were a wonderful technological invention. They enabled learning to scale so that education was not only the domain of society’s elites. Classrooms made it (economically) possible to educate all citizens. And it is a model that worked quite well.

(Un)fortunately things change. Technological advancement, coupled with rapid growth of information, global connectedness, and new opportunities for people to self-organize without a mediating organization, reveals the fatal flaw of classrooms: slow-developing knowledge can be captured and rendered as curriculum, then be taught, and then be assessed. Things break down when knowledge growth is explosive. Rapidly developing knowledge and context requires equally adaptive knowledge institutions. Today’s educational institutions serve a context that no longer exists and its (the institution’s) legacy is restricting innovation.

George calls for the development of an open learning analytics architecture based on the idea that: “Knowing how schools and universities are spinning the dials and levers of content and learning – an activity that ripples decades into the future – is an ethical and moral imperative for educators, parents, and students.”

I am not opposed to what he is saying, although I note Frances Bell’s comment about privacy of personal data. But I am unsure that such an architecture really would improve teaching and learning – and especially learning.

As George himself notes, the driving force behind the changes in teaching and learning that we are seeing today is the access afforded by new technology to learning outside the institution. Such access has largely rendered irrelevant the old distinctions between formal, non-formal and informal learning. OK – there is still an issue in that accreditation is largely controlled by institutions, which naturally place much emphasis on learning which takes place within their (controlled and sanctioned) domain. Yet even this is being challenged by developments such as Mozilla’s Open Badges project.

Educational technology has played only a limited role in extending learning. In reality we have provided access to educational technology to those already within the system. But the adoption of social and business software for learning – as recognised in the idea of the Personal Learning Environment – and the similar adaptation of these technologies for teaching and learning through Massive Open Online Courses (MOOCs) – have moved us beyond the practice of merely replicating traditional classroom architectures and processes in technology.

However there remain a series of problematic issues. Perhaps foremost is the failure to develop open curricula – or, better put, to rethink the role of curricula for self-organized learning.

For better or worse, curricula traditionally played a role in scaffolding learning – guiding learners through a series of activities to develop skills and knowledge. These activities were graded, building on previously acquired knowledge in developing a personal knowledge base which could link constituent parts, determining how the parts relate to one another and to an overall structure or purpose.

As Peter Pappas points out in his blog on ‘A Taxonomy of Reflection’, this in turn allows the development of what Bloom calls ‘Higher Order Reflection’ – enabling learners to combine or reorganize elements into a new pattern or structure.

Vygotsky recognised the importance of a ‘More Knowledgeable Other’ in supporting reflection in learning through the Zone of Proximal Development. Such an idea is reflected in the development of Personal Learning Networks, often utilising social software.

Yet the curricula issue remains – and especially the issue of how we combine and reorganise elements of learning into new patterns and structures without the support of formal curricula. This is all the more so since traditional subject boundaries are breaking down. Present technology support for this process is very limited. Traditional hierarchical folder structures have been supplemented by keywords, and with some effort learners may be able to develop their own taxonomies based on metadata. But the process remains difficult.
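
As a small illustration of the keyword alternative to folder hierarchies: a learner’s personal taxonomy is essentially an inverted index from tags to resources. The resources and tags below are invented examples.

```python
# Sketch of keyword tagging as an alternative to folder hierarchies;
# the resources and tags are invented examples.
from collections import defaultdict

resources = [
    {"title": "Introduction to statistics (video)", "tags": {"maths", "statistics"}},
    {"title": "Care plan case study", "tags": {"health", "assessment"}},
    {"title": "Probability worksheet", "tags": {"maths", "probability"}},
]

# Build an inverted index (tag -> titles): a learner's personal taxonomy.
index = defaultdict(list)
for resource in resources:
    for tag in resource["tags"]:
        index[tag].append(resource["title"])

# Unlike a folder path, one resource can live under many tags at once.
print(sorted(index["maths"]))
```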

So – if we are to go down the path of developing new open architectures – my priority would be an open architecture for curricula. Such a curriculum would play a dual role, supporting self-organised learning for individuals while at the same time supporting emergent rhizomatic curricula at a social level.


Rethinking e-Portfolios

The second in my ‘Rethinking’ series of blog posts. This one – ‘Rethinking e-Portfolios’ – is the notes for a forthcoming book chapter, which I will post on the Wales Wide Web when completed.

Several years ago, e-portfolios were the vogue in e-learning research and development circles. Yet today little is heard of them. Why? This is not an unimportant question. One of the failures of the e-learning community is our tendency to move from one fad to the next, without ever properly examining what worked, what did not, and the reasons for it.

First of all it is important to note that there was never a single understanding of, or approach to, the development and purpose of an e-Portfolio. This can largely be ascribed to different didactic and pedagogic approaches to e-Portfolio development and use. Some time ago I wrote that “it is possible to distinguish between three broad approaches: the use of e-Portfolios as an assessment tool, the use of e-Portfolios as a tool for professional or career development planning (CDP), and a wider understanding of e-Portfolios as a tool for active learning.”

In a paper presented at the e-Portfolio conference in Cambridge in 2005 (Attwell, 2005), I attempted to distinguish between the different processes in e-Portfolio development and then examined the issue of ownership for each of these processes.

[Diagram: processes and ownership in e-Portfolio development]

The diagram reveals not only ownership issues, but possibly contradictory purposes for an e-Portfolio. Is an e-Portfolio intended as a space for learners to record all their learning – that which takes place in the home or in the workplace as well as in a course environment – or is it a place for responding to prescribed outcomes for a course or learning programme? How much should an e-Portfolio be considered a tool for assessment and how much a tool for reflection on learning? Can one environment encompass all of these functions?

These are essentially pedagogic issues. But, as always, they are reflected in e-learning technologies and applications. I worked for a while on a project aiming to ‘repurpose’ the OSPI e-portfolio (later merged into Sakai) for use in adult education in the UK. It was almost impossible. The pedagogic use of the e-Portfolio (essentially to report against course outcomes) was hard-coded into the software.

Let’s look at another, and contrasting, e-Portfolio application, ELGG. Although now used as a social networking platform, in its original incarnation ELGG started out as a social e-portfolio, originating in research undertaken by Dave Tosh on an e-portfolio project. ELGG essentially provided for students to blog within a social network with fine-grained and easy-to-use access controls. All well and good: students were not restricted to course outcomes in their learning focus. But when it came to reporting on learning as part of any assessment process, ELGG could do little. There was an attempt to develop a ‘reporting’ plug-in, but that offered little more than the ability to favourite selected posts and accumulate them in one view.

Mahara is another popular open source e-Portfolio tool, although I have not actively played with Mahara for two years. While still built around a blogging platform, Mahara incorporated a series of reporting tools to allow students to present achievements. But it too was predicated on a (university) course and subject structure.

Early thinking around e-Portfolios failed to take into account the importance of feedback – or rather saw feedback as predominantly coming from teachers. The advent of social networking applications showed the power of the internet for what are now being called Personal Learning Networks – in other words, for developing personal networks to share learning and feedback. An application which merely allowed learners to develop their own records of learning, even if they could generate presentations, was clearly not enough.

But even if e-portfolios could be developed with social networking functionality, the tendency for institutionally based learning to regard the class group as the natural network limited their use in practice. Furthermore, the tendency, at least in the school sector, to limit network access in the mistaken name of e-safety once more limited the wider development of ‘social e-Portfolios’.

But perhaps the biggest problem has been around the issue of reflection. Champions have lauded e-portfolios as natural tools to facilitate reflection on learning. Helen Barrett (2004) says an “electronic portfolio is a reflective tool that demonstrates growth over time.” Yet are e-Portfolios effective in promoting reflection? And is it possible to introduce a reflective tool in an education system that values the passing of exams through individual assessment over all else? Merely providing spaces for learners to record their learning, albeit in a discursive style, does not automatically guarantee reflection. It may be that reflection involves discourse, and tools for recording outcomes offer little in this regard.

I have been working for the last three years on developing a reflective e-Portfolio for a careers service based in the UK. The idea is to provide students with an opportunity to research different career options and reflect on their preferences, desired choices and outcomes.

We looked very hard at existing open source e-portfolios as the basis for the project, but could not find any that met our needs. We eventually decided to develop an e-Portfolio based on WordPress, which we named Freefolio.

At a technical level Freefolio was part hack and part the development of a plug-in. Technical developments included:

  • The ability to aggregate summaries of entries on a group basis
  • The ability to add custom profiles and to see profiles of peers
  • Enhanced group management
  • The ability to add blog entries based on predefined XML templates (see the sketch after this list)
  • More fine grained access controls
  • An enhanced workspace view
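
As an aside on the XML templates: Freefolio’s actual schema isn’t documented here, so the layout below is an invented approximation of the idea of creating structured blog entries from predefined templates.

```python
# Invented approximation of a predefined XML template for a blog entry;
# Freefolio's real schema may have differed.
import xml.etree.ElementTree as ET

TEMPLATE = """
<template name="work-placement-review">
  <field id="location" prompt="Where was your placement?"/>
  <field id="learning" prompt="What did you learn?"/>
</template>
"""

def draft_post(template_xml: str, responses: dict) -> str:
    """Merge a learner's responses into a structured blog entry."""
    root = ET.fromstring(template_xml)
    lines = [root.get("name")]
    for field in root.findall("field"):
        lines.append(field.get("prompt"))
        lines.append(responses.get(field.get("id"), "(not answered)"))
    return "\n".join(lines)

print(draft_post(TEMPLATE, {"location": "Bolton Central Library",
                            "learning": "How new stock is catalogued."}))
```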

Much of this has been overtaken by subsequent releases of WordPress multi-user and, more recently, BuddyPress. But at the time Freefolio was good. However, it did not work in practice. Why? There were two reasons, I think. Firstly, the e-Portfolio was only being used for careers lessons in school, and that forms too small a part of the curriculum to build a critical mass of familiarity among users. And secondly, it was just too complex for many users. The split between the front end and the back end of WordPress confused users. The pedagogic purpose and the functional interface were too far apart: why press something called ‘New Post’ to write about your career choices?

And, despite our attempts to allow users to select different templates, we had constant feedback that the appearance of the e-Portfolio was not easy enough to customise.

In phase two of the project we developed a completely different approach. Rather than produce an overarching e-portfolio, we have developed a series of careers ‘games’ to be accessed through the careers company’s web site. Each of the six or so games, or mini-applications, we have developed so far encourages users to reflect on different aspects of their careers choices. Users are encouraged to rate different careers and to return later to review their choices. The site is yet to be rolled out but initial evaluations are promising.

I think there are lessons to be learnt from this. Small applications that encourage users to think are far better than comprehensive e-portfolio applications which try to do everything.

Interestingly, this view seems to concur with that of CETIS. Simon Grant points out: “The concept of the personal learning environment could helpfully be more related to the e-portfolio (e-p), as both can help informal learning of skills, competence, etc., whether these abilities are formally defined or not.”

I would agree: I have previously seen both as related on a continuum, with differing foci but similar underpinning ideas. However, I have always tended to view Personal Learning Environments as a pedagogic approach, rather than an application. Despite this, there have been attempts to ‘build a PLE’. In that respect (and in relation to rethinking e-Portfolios) Scott Wilson’s views are interesting. Simon Grant says: “As Scott Wilson pointed out, it may be that the PLE concept overreached itself. Even to conceive of “a” system that supports personal learning in general is hazardous, as it invites people to design a “big” system in their own mind. Inevitably, such a “big” system is impractical, and the work on PLEs that was done between, say, 2000 and 2005 has now been taken forward in different ways — Scott’s work on widgets is a good example of enabling tools with a more limited scope, but which can be joined together as needed.”

Simon Grant goes on to say that the “thin portfolio” concept (borrowing from the prior “personal information aggregation and distribution service” concept) “represents the idea that you don’t need that portfolio information in one server; but that it is very helpful to have one place where one can access all ‘your’ information, and set permissions for others to view it. This concept is only beginning to be implemented.”

This is similar to the Mash-Up Personal Learning Environment being promoted in a number of European projects. Indeed, a forthcoming paper by Fridolin Wild reports on research looking at the value of lightweight widgets for promoting reflection that can be embedded in existing e-learning programmes. This is an interesting idea in suggesting that tools for developing an e-Portfolio (or, for that matter, a PLE) can be embedded in learning activities. This approach does not need to be restricted to formal school or university based learning courses. Widgets could easily be embedded in work-based software (and workflow software), and our initial investigations of Work Oriented Personal Learning Environments (WOMBLES) have shown the potential of mobile devices for capturing informal and work-based learning.

Of course, one of the big developments in software since the early e-Portfolio days has been the rise of web 2.0, social software and, more recently, cloud computing. There seems little point in us spending time and effort developing applications for students to share PowerPoint presentations when we already have the admirable Slideshare application. And for bookmarks, little can compete with Diigo. Most of these applications allow embedding, so all work can be displayed in one place. Of course there is an issue as to the longevity of data on such sites (but then, we have the same issue with institutional e-Portfolios, and I would always recommend that students retain a local copy of their work). And not all students are confident in the use of such tools: a series of recent studies have blown apart the Digital Native thesis (see for example Hargittai, E. (2010). Digital Na(t)ives? Variation in Internet Skills and Uses among Members of the “Net Generation”. Sociological Inquiry, 80(1), 92-113). And some commercial services may be more suitable than others for developing an e-Portfolio: Facebook, in my view, has limitations!

But, somewhat ironically, cloud computing may be moving us nearer to Helen Barrett’s idea of an e-Portfolio. John Morrison recently gave a presentation (downloadable here) based on his study of ‘what aspects of identity as learners and understandings of ways to learn are shown by students who have been through a program using course-based networked learning?’ In discussing technology he looked at university as opposed to personally acquired technology, standalone as opposed to networked, and explored as opposed to ongoing use.

He found that students:

  • did not rush to use new technology;
  • used face-to-face contact rather than technology, particularly in the early brainstorming phases of a project;
  • tried out software and rejected anything that was not meeting a need;
  • used a piece of software until another emerged which was better (which John equates with change);
  • restrained the amount of software they used regularly to relatively few programs (which he equates with conservatism);
  • ignored certain technologies altogether, which don’t appear to have been tried out at all.

Whilst students were previously heavy users of Facebook, they were now abandoning it. And whilst there was little previous use of Google Docs, his latest survey suggested that this cloud application was now being heavily used. This is important, in that one of the stranger aspects of previous e-Portfolio development has been the requirement for most students to upload files, produced in an offline word processor, to the e-Portfolio and present them as attachments. But if students (no doubt partly driven by cost savings) are using online software for their written work, this may make it much easier to develop online e-portfolios.

John concluded that “this cohort lived through substantial technological change. They simplified and rationalized their learning tools. They rejected what was not functional, university technology and some self-acquired tools. They operate from an Acquisition model of learning.” He also concluded that “Students can pick up and understand new ways to learn from networks. BUT… they generally don’t. They pick up what is intended.” (It is also well worth reading the discussion board around John’s presentation – although you will need to be logged in to the Elesig Ning site.)

So – the e-Portfolio may have a new life. But what particularly interests me is the interplay between pedagogic ideas and applications, and software opportunities and developments, in providing that potential new life. And of course, we still have to solve that issue of control and ownership. And as John says, students pick up what is intended. If we continue to adhere to an acquisition model of learning, it will be hard to persuade students to develop reflective e-Portfolios. We should continue to rethink e-Portfolios through a widget-based approach. But we have also to continue to rethink our models of education and learning.