Week 7 & 8 Summarised Reflections


Assignment 2.1 Group Work
As a group we collaboratively decided to design a rating tool for user comfort in public libraries. When we were trying to define our scope there was contention among group members: some wanted to include conventional parameters relating to the overall resource consumption of libraries (such as energy use, water and waste), while others wanted to limit the scope exclusively to parameters that define user comfort, such as aesthetics, biophilia, user-friendliness and furniture. After some internal discussion we reached a consensus to limit our scope to the parameters that directly influence user comfort, since conventional building performance criteria (such as water, waste and electricity) are already extensively covered by major rating tools such as LEED and the GBCA's Green Star.

The entire team then worked collaboratively on designing this user comfort rating tool, which we have named "Clover." Clover covers four areas that affect occupants' perception and comfort: aesthetics, sensory experience, comfort and friendliness.
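To make that structure concrete, below is a minimal sketch in Python of how four category scores could be rolled up into an overall star rating. Only the four category names come from our design; every score, threshold and star band here is purely illustrative and not part of Clover's actual scoring scheme.

# Minimal sketch of a category-based comfort score.
# The category names come from Clover; all numbers are illustrative.

CATEGORIES = ["aesthetics", "sensory", "comfort", "friendliness"]

def overall_score(category_scores):
    """Average the four category scores (each assumed to be 0-10)."""
    return sum(category_scores[c] for c in CATEGORIES) / len(CATEGORIES)

def star_rating(score):
    """Map an overall 0-10 score onto an illustrative 1-5 star band."""
    for cutoff, stars in [(9.0, 5), (7.5, 4), (6.0, 3), (4.0, 2)]:
        if score >= cutoff:
            return stars
    return 1

# Example: a library strong on comfort but weaker on aesthetics.
scores = {"aesthetics": 6.0, "sensory": 8.0, "comfort": 9.0, "friendliness": 7.5}
total = overall_score(scores)
print(f"Overall {total:.1f}/10 -> {star_rating(total)} Star")  # Overall 7.6/10 -> 4 Star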


Week 6 Pre-Session and In-Session Reflections:


I downloaded the GBCA workbook and brought it with me to the foundation workshop. The workshop was indeed very useful in understanding how a rating tool has developed over time, and how industry professionals and professional bodies have worked collaboratively to launch and establish this tool into what it is today: a rating tool to which value and prestige are attributed.

The workshop provided a great overview of the different criteria covered by the GBCA scoring system and of what is required to achieve each rating level (e.g. 3 Star, 4 Star, 5 Star). It also helped me understand that a building rated 5 Star ten years ago will perform far worse than a building rated 5 Star very recently, because the criteria and framework have become increasingly stringent over the years as performance benchmarks were raised. This raises a question of comparability: although two buildings were awarded the same number of stars, their actual performance can differ considerably. I also learned that there is a great difference between a rating awarded "as designed", "as built" and "in operation", and it seems obvious that the "in operation" rating should be the best indicator of actual performance.
