
2009 HSC Notes from the Marking Centre – Software Design and Development

Introduction

This document has been produced for the teachers and candidates of the Stage 6 course in Software Design and Development. It contains comments on candidate responses to the 2009 Higher School Certificate examination, indicating the quality of the responses and highlighting their relative strengths and weaknesses.

This document should be read along with the relevant syllabus, the 2009 Higher School Certificate examination, the marking guidelines and other support documents which have been developed by the Board of Studies to assist in the teaching and learning of Software Design and Development.

Teachers and students are advised that, in December 2008, the Board of Studies approved changes to the examination specifications and assessment requirements for a number of courses. These changes will be implemented for the 2010 HSC cohort. Information on a course-by-course basis is available on the Board’s website.

General comments

Teachers and candidates should be aware that examiners may ask questions that address the syllabus outcomes in a manner that requires candidates to respond by integrating their knowledge, understanding and skills developed through studying the course. It is important to understand that the Preliminary course is assumed knowledge for the HSC course.

Teachers and candidates should be aware that examiners may ask questions in Sections I and II that combine knowledge, skills and understandings from across the core of the HSC syllabus.

Candidates need to be aware that the marks allocated to the question and the answer space (where this is provided on the examination paper) are a guide to the length of the required response. A longer response will not in itself lead to higher marks. Writing far beyond the indicated space may reduce the time available for answering other questions.

Candidates need to be familiar with the Board’s Glossary of Key Words which contains some terms commonly used in examination questions. However, candidates should also be aware that not all questions will start with or contain one of the key words from the glossary. Questions such as ‘how?’, ‘why?’ or ‘to what extent?’ may be asked or verbs may be used which are not included in the glossary, such as ‘design’, ‘translate’ or ‘list’.

Section II

General comments

Many candidates showed a sound understanding of concepts but did not apply this knowledge appropriately, often giving general answers or answers not directly related to the particular situation described in the question. Candidates should note that if a scenario is provided in the question, then it should be referred to in their responses. Candidates should relate their knowledge of the concept being examined to the situation or system described in the question.

Question 21

    1. In the better responses, candidates discussed the need for a structured approach due to the costs and long timeline and then included the use of a prototype in the design stage to fully satisfy the scenario. These responses included a good description of the processes involved in the structured approach and full justification of that approach.

      Weaker responses spoke generically about an approach (usually prototyping) without giving details of its relationship to the scenario or justifying it beyond restating the reasons given in the question. Many candidates failed to provide details about the software development approach they had chosen, making it difficult to justify the approach in any detail.
    2. Responses indicated that candidates found this question more difficult to answer, as they had to understand what type of design tool was being described before they could discuss its associated benefits.

      Better responses described the use of an electronic tool to generate and edit structure diagrams together with the benefits of a large team using a structure diagram. They also discussed the benefits to the development team of having the data in electronic form and automatically updated as changes are made.

      Weaker responses provided a general discussion of the benefits of design tools but did not back it up with explicit reference to the scenario.

      Poorer responses indicated a lack of knowledge of the type of tool being described and merely repeated phrases from the question demonstrating little understanding of design tools.
    1. Weaker responses did not distinguish between technical, operational, budgetary or scheduling constraints. Many also simply identified the constraints without further discussion. Responses that dealt with a variety of constraints often discussed irrelevant issues beyond the scope of the question.
    2. Better responses provided detailed discussions addressing a broad variety of both social and ethical issues. These responses also dealt with issues surrounding the use of the hardware and implications of the software rather than simply describing general social issues related to access to the sporting/entertainment event.

      Weaker responses only outlined a single issue or simply identified one or two issues without any supporting discussion. Often weaker responses described simple social issues without a strong understanding of the implications of the hardware or the issues associated with access to confidential police files.
  1. In the better responses, candidates described multiple features related to performance issues. Appropriate issues included overall throughput of the system being tested, response times, and reliability tested by using a mix of transaction types.

    Weaker responses referred to testing features that would be used in earlier stages of development, such as code testing, debugging and interface design.
    1. Better responses referred to problems/failures with the planning process. These included inadequate planning so that the project failed to meet client requirements, or poor budget planning causing the project to be abandoned because it had run out of money.
      Weaker responses simply referred to possible problems with the finished program such as bugs, need for maintenance, and changes required due to technology changes in the market.
    2. Better responses referred to the consequences for users of using poorly designed and constructed software such as the delivered software still containing logic bugs causing system crashes and loss of valuable company data.

      Weaker responses often referred to consequences for the developers such as legal proceedings due to poor quality software or not attributing use of code from other sources.
    3. Better responses outlined a number of software developer responsibilities and described how those responsibilities contributed to the success of the project.

      Weaker responses outlined only vague responsibilities and did not describe how they related to success, such as stating the responsibility of software developers to maintain copyright, or the need for developers to test software.

Question 22

    1. Common incorrect responses included data flow diagram, context diagram and structure chart.
    2. Stronger responses recognised that this tool is the only one that identifies hardware or media used in the system. Most responses merely described the purpose of modelling tools in general.
    1. In better responses, candidates noted changes in all the variables including the array through all the iterations of the two loops using a table. Many candidates rewrote the value of each variable on a new line in the table after each change.

      A number of candidates followed the algorithm and variable changes but did not present them in a recognised desk check form.

      In weaker responses, candidates did not show all of the variables in the algorithm or merely included the index and element provided in the test data. Some did not perform all of the iterations or just indicated how a sort would re-order the data.
    2. In better responses, candidates correctly identified the sort used and stated features of the algorithm to justify their choice. The majority of candidates named the sort correctly and justified their choice.

      In weaker responses, candidates incorrectly identified the sort as a bubble sort and provided features of this sort, or attempted to provide vague references to the finding of a maximum in a bubble sort.
    3. Many responses indicated that candidates were able to correctly swap the elements of the array, passing the required parameters correctly. In the weaker responses, candidates did not handle the parameter passing appropriately.

      A significant number of candidates were unable to correctly swap two variables. Generally these poor responses tried to compare Names(current) and Names(current + 1), making no attempt to actually swap them. In a few responses, candidates did not recognise that the greater than symbol > referred to an alphabetical comparison of the names rather than to the length of the names.
    1. In many responses, candidates were not able to distinguish between a DFD and other graphical modelling tools. Candidates are reminded that they need to be familiar with the symbols that are used in DFDs. Candidates are also reminded that data does not flow directly from a data store to an external entity (or vice versa). All data must flow through a process.

      In better responses, candidates correctly used the appropriate symbols and included and named the multiple processes that were part of this system.

      In many weaker responses, candidates inappropriately used the symbols provided in the system flow chart in the earlier part of question 22 or those provided in the structure chart in the multiple-choice question.
    2. Most candidates were able to construct a data dictionary including at least some of the characteristics and the variables. Better responses showed all required variables. However, many did not recognise the ‘alert message’ as a data item.
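The desk check expected in part (b)(i) can be sketched as follows. This is an illustrative reconstruction in Python rather than exam pseudocode: the algorithm (summing an array) and the data are invented, but the trace-table habit shown — recording every variable on a new row after each change — is the one the better responses used.

```python
# A minimal desk-check sketch: after each assignment, the current value of
# every variable is written on a new row, as in a trace table.
data = [4, 1, 3]   # hypothetical test data
total = 0

print("i | data[i] | total")
for i in range(len(data)):
    total = total + data[i]
    print(f"{i} |    {data[i]}    | {total}")  # one trace-table row per change

print(total)  # 8
```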
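The swap logic discussed in part (b)(iii) can be sketched like this. The names, data and the use of a selection sort (repeatedly finding a maximum, as the stronger responses described) are assumptions for illustration; the point is that a correct swap needs a temporary variable, and that `>` on names is an alphabetical comparison, not a comparison of lengths.

```python
def swap(names, i, j):
    # A correct swap uses a temporary variable so neither value is lost;
    # comparing names[i] and names[j] alone does not exchange them.
    temp = names[i]
    names[i] = names[j]
    names[j] = temp

def selection_sort(names):
    # Each pass finds the alphabetically greatest remaining name and
    # swaps it into its final position at the end of the unsorted section.
    for last in range(len(names) - 1, 0, -1):
        max_index = 0
        for current in range(1, last + 1):
            if names[current] > names[max_index]:  # alphabetical comparison
                max_index = current
        swap(names, max_index, last)

names = ["Dana", "Ben", "Alice", "Carl"]  # invented test data
selection_sort(names)
print(names)  # ['Alice', 'Ben', 'Carl', 'Dana']
```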

Question 23

    1. Most candidates provided two distinct reasons for modifying code. Poorer responses used vague terms such as ‘to make it better’.
    2. The better responses made a clear link between specific types of documentation and how the features of that documentation aided in identifying sections of code to be modified. This often involved the use of examples of how the documentation could be used.

      There was some confusion between intrinsic and internal documentation. There was also some confusion about error tracking or automated debugging techniques and documentation.

      Weaker responses simply identified or described, rather than explaining the link between documentation and modifying code.
    1. Better responses provided a clear statement about the location and nature of the error. They suggested that the second pre-test loop could be replaced with a post-test loop with the condition UNTIL touchpad = TRUE.

      Mid-range responses often identified the correct location for the error (the second pre-test loop). However, they incorrectly stated that the error was caused by swimmers who rested their hand on the touchpad at the end of the race, receiving an inaccurate time because swimmer_time would continue to update.
    2. While many responses correctly recognised that the swimmer’s start time was initialised at –1.000, they did not state explicitly that the swimmer_start_time variable would still hold this value at the end of the race. The better responses related this to the fact that startBlockOccupied would be FALSE when line 210 is executed.
    3. Candidates are reminded that line numbers are provided in the algorithm to make it easy to refer to specific lines of code. When asked to modify an algorithm, candidates may use the given line numbers as a framework. For example, it is not necessary to say ‘between lines 260 and 270 add …’, rather candidates may simply say ‘add at line 265 …’

      In better responses, candidates explicitly indicated the line number area where the code was to be inserted and provided the correct Display swimmer_time statement.

      In weaker responses, candidates used the correct display statement but had this located in the wrong section of code, usually between lines 250 and 260. Alternatively, they correctly located the print statement at either line 265 or line 145, but used an incorrect variable such as print GetTimerValue.

      Some weaker responses attempted to correct the error from part (i).
    1. Candidates are reminded that they need to be familiar with terminology that is appropriate to this course. Many responses did not correctly identify the data structure as an array of records.
    2. Better responses clearly described all aspects of the impact of the errors with the best responses also providing possible solutions to fix the error.

      Weaker responses just quoted a line number, which in itself does not constitute an error. Rather, an error is usually due to a combination of missing or incorrect statements. In some responses, candidates incorrectly attempted to discuss the layout of the code or missing ELSE statements as the cause of the error.

      Candidates should be familiar with working with arrays whose indexes start from either 0 or 1. In some responses, candidates incorrectly stated that the error was caused by the array being indexed from 0 rather than 1.
    3. In better responses, candidates realised that the required subprogram comprised four segments – Initialisation, Counting Organisms, Find Max and Print. These segments were based on standard algorithms with which candidates should be familiar. In weaker responses, routines were named or referred to, but no further detail was provided for these routines.

      In better responses, candidates used pseudocode to write the subprogram using correct control structures and meaningful variable names. Responses in which flow charts were used often provided incorrect control structures.
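The distinction between a pre-test and a post-test loop, which part (b)(i) turned on, can be sketched as follows. Python has no REPEAT … UNTIL construct, so the post-test loop is emulated with `while True:` and `break`; the `touchpad` readings are invented for illustration.

```python
# Hypothetical sensor readings: False until the swimmer touches the pad.
readings = iter([False, False, True])

def touchpad():
    return next(readings)

# Pre-test loop (WHILE): the condition is checked BEFORE the body,
# so the body may never run if the condition is false at entry.
# Post-test loop (REPEAT ... UNTIL touchpad = TRUE): the body always
# runs at least once, and the loop exits as soon as the touch occurs.
presses = 0
while True:          # emulation of REPEAT
    presses += 1     # loop body always executes at least once
    if touchpad():   # UNTIL touchpad = TRUE
        break

print(presses)  # the body ran 3 times; the touch arrived on the third reading
```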
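An "array of records" (part (c)(i)) and the segments named in part (c)(iii) — Counting, Find Max, Print — can be sketched together like this. The field names, organisms and sighting data are all invented; each dictionary stands in for one record of the array.

```python
# An array of records, sketched as a Python list of dictionaries.
observations = [
    {"organism": "ant",  "count": 0},
    {"organism": "bee",  "count": 0},
    {"organism": "wasp", "count": 0},
]

sightings = ["bee", "ant", "bee", "wasp", "bee"]  # invented input data

# Counting: tally each sighting against its matching record.
for s in sightings:
    for record in observations:
        if record["organism"] == s:
            record["count"] += 1

# Find Max: the standard maximum-finding algorithm, applied to a record field.
max_record = observations[0]
for record in observations[1:]:
    if record["count"] > max_record["count"]:
        max_record = record

# Print: report the most frequently sighted organism.
print(max_record["organism"], max_record["count"])  # bee 3
```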

Section III

Question 24 – Evolution of programming languages

    1. Most candidates correctly identified a method.
    2. In better responses, candidates correctly defined the TEACHER subclass with three private attributes, replicating the code layout provided. Weaker responses attempted to define the TEACHER subclass using an incorrect format or did not define all attributes as private.
    3. In better responses, candidates explained the importance of inheritance, outlining in their answers the concept of how inheritance works and linking this with examples from the question. They then identified advantage(s) provided by inheritance.

      Mid-range responses neglected to outline the advantages of inheritance or did not support their argument with examples from the segment of code. Weaker responses presented an explanation of inheritance in general terms with no link to the segment of code and without any explanation of why inheritance is so useful.
    1. Most candidates correctly identified the paradigm as functional. Better responses justified their decision by linking parts of the code explicitly to a feature of the functional paradigm, for example noting that T3 contained nested function calls, calling the functions T1 and T2 within it.
    2. Better responses correctly calculated the answer, showing all working. Weaker responses did not evaluate the result returned by a function, or failed to use that result in subsequent function calls.
    1. Better responses outlined the term ‘programmer productivity’ through the identification of two or more qualities that influence a programmer’s productivity. In weaker responses, candidates often referred only to the speed of writing code as the sole factor in describing a programmer’s productivity.
    2. Better responses provided a number of reasons for the development of different paradigms, using examples to illustrate how each reason was linked to the development of particular paradigms. In mid-range responses, candidates provided the description for only one reason, or just identified general reasons for development.
  1. In better responses, candidates selected the logic paradigm and provided detailed justification for their choice, linking the development of a solution to the software program with the correct characteristics and features of the logic paradigm, such as facts, rules, knowledge base, goals and forward chaining.

    In mid-range responses, candidates identified a paradigm and provided superficial links between the problem and some of the features of the selected paradigm. Many of the responses in which the OOP paradigm was selected did not attempt to link the features and characteristics of OOP to the solution required and only briefly mentioned the concept of storing data in a car class.
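The subclass definition asked for in part (a)(ii) can be sketched in Python as follows. The exam used its own OOP notation, and the class and attribute names here are assumptions; what the sketch shows is the structure the better responses had — a subclass with three private attributes that reuses, through inheritance, behaviour defined once in the parent class.

```python
class Person:
    def __init__(self, name):
        self._name = name       # "private" by Python convention (leading _)

    def get_name(self):         # defined once, inherited by every subclass
        return self._name

class Teacher(Person):          # TEACHER as a subclass of the parent class
    def __init__(self, name, faculty, room, extension):
        super().__init__(name)  # reuse the parent's initialisation
        self._faculty = faculty      # three additional private attributes
        self._room = room
        self._extension = extension

t = Teacher("A. Smith", "Computing", "B12", 204)
print(t.get_name())  # get_name is inherited, not redefined in Teacher
```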
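The nested function calls discussed in part (b) can be sketched like this. The bodies of T1, T2 and T3 below are invented — the exam's definitions are not reproduced here — but the structure (T3 calling T1 and T2, each function returning a value that feeds the enclosing computation) is the feature the better responses identified.

```python
def T1(x):
    return x + 1

def T2(x):
    return x * 2

def T3(x):
    # Nested function calls: the values RETURNED by T1 and T2 become
    # the inputs to the enclosing expression. Weaker responses lost marks
    # by not carrying these returned results into the next step.
    return T1(x) + T2(x)

print(T3(3))  # T1(3) = 4, T2(3) = 6, so T3(3) = 10
```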

Question 25 – The software developer’s view of the hardware

    1. Better responses clearly showed the place value of each of the binary digits. Other responses correctly converted the decimal value 9 into binary using division by 2.
    2. Many candidates had difficulty performing a binary addition using two’s complement. Better responses clearly labelled the conversion process and recognised that only the negative number (-2) needed to be converted into its two’s complement representation. Weaker responses often mixed up sign-modulus and complements or attempted to perform a binary subtraction.
    3. Most responses demonstrated some understanding of floating point representation. Despite being specifically asked for examples, weaker responses provided simplistic examples or did not demonstrate understanding of the characteristics of floating point numbers. Stronger responses included worked examples of floating point representation using the IEEE 754 notation and a discussion of real-world examples of its use.
    1. Stronger responses clearly identified and interpreted only the L/R and F/B data section of the data stream. Weaker responses attempted to interpret each 2-bit sequence in the data stream (header, data and trailer) as separate instructions.
    2. This question required students to modify a data stream to transmit additional data. Stronger responses recognised the possibility of using the unused bits in the data section of the data stream and identified the need to add bits to represent the speed for both axes. Stronger responses also showed binary variations for the new data and how it could relate to speed. Weaker responses did not clearly identify the location of the speed data in the data stream or did not identify the need to include data for both axes.
    1. Stronger responses clearly identified the main purpose of a flip-flop as its ability to store 1 bit of data. Weaker responses did not clearly identify that only 1 bit of data is stored in a flip-flop, instead using vague terms like ‘store data’.
    2. Some responses indicated confusion between parts (c)(i) and (c)(ii) as they did not clearly demonstrate the difference between ‘purpose’ and ‘operation’. Stronger responses included a diagram of a flip-flop using NAND or NOR gates with a relevant truth table for this circuit. They went on to clearly describe the operation of the flip-flop, recognising the Set and Reset states, memory state, complementary nature of outputs and relevance of the crossover (or feedback) included in the circuit. Weaker responses used terms like ‘flipping’ which demonstrated a superficial understanding of the operation of the flip-flop.
  1. In better responses, candidates accurately constructed a truth table including the intermediate states. They then used this truth table to simplify the logic circuit by creating a second circuit for that same truth table. The new circuit design could be justified by creating a second truth table for the new circuit for comparison.

    Mid-level responses did not construct a full truth table for their simplified logic circuit, failing to clearly justify their choice of circuit.

    Weaker responses incorrectly identified the logic for the circuit provided or had difficulty simplifying the circuit. The weakest responses demonstrated little knowledge of how gates are correctly linked together.
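The two's complement addition in part (a)(ii) can be worked through as follows. The operands here (5 + (−2), in 8 bits) are invented for illustration; the key steps are the ones the better responses showed — only the negative operand is converted, and the carry out of the top bit is discarded.

```python
BITS = 8

def twos_complement(n, bits=BITS):
    """Two's complement encoding: for negative n, invert the bits of |n|
    and add 1 — equivalently, add 2**bits to n."""
    if n >= 0:
        return n
    return (1 << bits) + n

# Compute 5 + (-2): only the negative operand needs converting.
a = twos_complement(5)    # 0000 0101
b = twos_complement(-2)   # 1111 1110
total = (a + b) & ((1 << BITS) - 1)   # discard the carry out of bit 7

print(format(total, "08b"), total)  # 00000011 3
```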
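The flip-flop behaviour described in parts (b)(i) and (b)(ii) can be simulated as a truth table. This sketch models the cross-coupled NAND latch (active-low inputs) that the stronger responses drew, iterating the feedback until the outputs settle; it is a behavioural illustration, not the exam's circuit.

```python
def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(s_bar, r_bar, q=0, q_bar=1):
    """Cross-coupled NAND latch: each output feeds back into the other
    gate. Iterate until the outputs stop changing (the stable state)."""
    for _ in range(4):
        q_new = nand(s_bar, q_bar)
        q_bar_new = nand(r_bar, q_new)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

print(sr_latch(0, 1))        # Set:   Q = 1, Q' = 0
print(sr_latch(1, 0))        # Reset: Q = 0, Q' = 1
print(sr_latch(1, 1, 1, 0))  # Hold:  the stored bit is remembered
```

The hold case is the "main purpose" part (b)(i) asked about: with both inputs inactive, the feedback keeps the previously stored bit, so the circuit stores exactly 1 bit of data.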
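The justification step in part (c) — comparing the truth table of the simplified circuit against the original — can be sketched mechanically. The two circuits below are hypothetical (the exam's circuit is not reproduced here); they illustrate a De Morgan simplification, NOT(NOT A AND NOT B) = A OR B, checked over every input combination.

```python
from itertools import product

def original(a, b):
    # A hypothetical circuit: two NOT gates into an AND, then a NOT.
    return not ((not a) and (not b))

def simplified(a, b):
    # Candidate simplification: a single OR gate.
    return a or b

# Justify the simplification by building both full truth tables
# and confirming every row matches.
for a, b in product([False, True], repeat=2):
    assert original(a, b) == simplified(a, b)

print("truth tables match")
```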

