HM Brainstorm


Brainstorm by Nick Youngson CC BY-SA 3.0 Alpha Stock Images


Last fall and winter, Humanitarian Makers members participated in an Item Testing pilot that opened items designed in the field (focusing on a curated list of Field Ready’s items) to testing by members around the world. We shared that experience after the pilot, and you can read about it here. One initiative that spun out of this effort was an informal, digital brainstorm among three of the members who participated in the testing. The brainstorm focused on a key question that came out of the pilot:


How might humanitarian makers advance item design and development at the post-testing stage?


The three individuals invited to delve into this question were:

Jonathan Rushton (Made In...Shoes / Trillium - UK)
Kobina Abakah-Paintsil (Klaks3D - Ghana)
Tommy Sebastian (MIT - USA)


The brainstorming conversation was kept small in an attempt to keep the dialogue rich and efficient. These three were selected because of their initiative during the testing, their vision for doing this better, and their shared goals with Humanitarian Makers. Naiomi Lundman facilitated the brainstorm. Many thanks to these three for the time and effort they put into this.


The question was asked within the following context:


HM and Field Ready piloted community engagement on the item testing and feedback piece. A next step was to think deliberately about the feasibility of community engagement at the next item development stage: evaluating testing feedback and developing the next iterations. Process visual here (thanks to Field Ready).


A main challenge: remote item development is several steps removed from understanding the initial problem statement; as such, a next iteration developed remotely may not be suitable for the original intent. (This may be okay?)


The ultimate goal: shared items that are usable and reliable (quality) in the humanitarian context [achieving the 4A target of accessible, affordable, appropriate, and available].


Brainstorm Summary:

  1. Version control software is key, yet a challenge

  2. Flexibility with file revisions is important

  3. Applicability across multiple production processes is desired (e.g., CNC applications)

  4. Cloud-based tools offer a lower barrier to use, provided users have good internet access

  5. Integrated simulation is desirable

  6. An easy-to-use interface is attractive

  7. Testing feedback needs to account for the variability in materials and production technologies

  8. Testing needs to be functional, inclusive of qualitative assessments and of load testing

  9. Load testing may be possible by testing intended use (e.g., an IV hook tested by holding an IV bag)

  10. How to do this in a representative and useful way? (e.g., is cyclic failure of a part a concern if it only needs to last a little while?)

  11. Balancing complexity vs. simplicity may always be a challenge (to what length do we test?)

  12. To enable wide production and use, hobbyist-level CNC manufacturing methods may need to be utilized predominantly

  13. Sharing source files is critical

  14. Co-development will enable stakeholders to determine the appropriate testing and may negate the need for separate risk communication

Photo by: Monique Randolph


If you’d like to read the full brainstorm (conducted via email), it is shared below. Have additional thoughts and insights you’d like to share? Please do so in the comment section.


****

TS: Version control of parts is a hard challenge, particularly with fabrication possible using a wide range of materials and fabrication technologies. Feedback needs to account for this somehow, I think. Testing needs to be functional in nature as well… along with qualitative assessments, it seems that load testing (for many of the parts of interest) is necessary… that said, it may not be possible for many to execute.


KA: For version control I'll suggest the use of GrabCAD Workbench (https://grabcad.com/workbench/). It offers flexibility with file revisions, etc. If all files are available there, different revisions for different technologies can be saved without losing the original files.
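To make the revision-tracking idea a bit more concrete, here is a minimal sketch of the kind of metadata a shared item could carry so that process-specific revisions never overwrite the original source. The structure, field names, and example values are illustrative assumptions for this post, not GrabCAD's actual data model or any agreed Humanitarian Makers schema.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Revision:
    """One revision of a shared item, tied to a specific process and material."""
    revision_id: str            # e.g. "iv-hook-fdm-r1" (illustrative naming)
    process: str                # e.g. "FDM", "SLA", "CNC"
    material: str               # e.g. "PLA", "ABS", "plywood"
    source_file: str            # path or URL of the editable source file
    derived_from: str           # the file this revision was adapted from
    test_notes: List[str] = field(default_factory=list)


@dataclass
class Item:
    """A shared item plus all of its process-specific revisions."""
    name: str
    original_source: str        # the original file is kept and never overwritten
    revisions: List[Revision] = field(default_factory=list)

    def revisions_for(self, process: str) -> List[Revision]:
        """Return every revision adapted for a given fabrication process."""
        return [r for r in self.revisions if r.process == process]


# Example: an IV hook with one FDM-specific revision
iv_hook = Item(name="IV Hook", original_source="iv_hook_original.step")
iv_hook.revisions.append(
    Revision(
        revision_id="iv-hook-fdm-r1",
        process="FDM",
        material="PLA",
        source_file="iv_hook_fdm_r1.step",
        derived_from="iv_hook_original.step",
        test_notes=["Held a 1 L IV bag for 24 h without visible deformation."],
    )
)
print([r.revision_id for r in iv_hook.revisions_for("FDM")])
```

Whatever platform ends up being used, keeping a record like this alongside the source files would let a tester see which revision matches their printer and material before fabricating anything.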

For load testing, I think much of what most people can undertake is testing the parts for their actual purposes. For example, using IV bags to test IV hooks. Perhaps we can also look into getting some common artifacts that can be used as loads for testing.
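As a rough back-of-envelope illustration of what "testing by intended use" implies for an IV hook, the sketch below estimates the static load a hook sees from a full bag. The bag size, tubing allowance, and safety factor are illustrative assumptions only, not test requirements from the pilot.

```python
# Rough estimate of the static load an IV hook must carry when tested
# with a full IV bag. All numbers below are illustrative assumptions.

bag_volume_l = 1.0             # assume a 1 L saline bag
fluid_density_kg_per_l = 1.0   # saline is close to water, roughly 1 kg per litre
bag_and_line_kg = 0.1          # assumed allowance for the empty bag and tubing
g = 9.81                       # gravitational acceleration, m/s^2

load_n = (bag_volume_l * fluid_density_kg_per_l + bag_and_line_kg) * g
safety_factor = 3.0            # illustrative margin for variability in print quality

print(f"Working load: {load_n:.1f} N")
print(f"Suggested test load with safety factor: {load_n * safety_factor:.1f} N")
```

Hanging a full bag checks roughly a 10 N working load; adding an informal margin (here 3x) gives a sense of what a simple weighted test might target without any dedicated equipment.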


NL: Thanks for getting this ball rolling, you two. Allow me to chime in with some questions in response to your comments.

Kobina - there's an analysis we did of many different file-sharing platforms. You're welcome to take a look through it if you'd like, considering your interest in GrabCAD. (I've shared the links below.) Also, how do the people you work with in Ghana use online file-sharing platforms? Is this something they easily access and use currently?

Considering the points Tommy raised on load testing, what type of artifacts are you thinking of, Kobina?

Tommy, why do you think many may not be able to execute load testing?

There is a thought that testing equipment itself could be made open online. Is it feasible for many to make and calibrate their own testing machines so that some standardization or reliability in testing is achieved? I imagine the robustness of testing depends on the item's purpose and risks. How might this be navigated or communicated?

As noted above, here is the documentation we have on digital collaboration platforms for open-hardware-related functions.

1. Platform analysis spreadsheet

2. Criteria for determining "field readiness" (important because it factors into what needs to be communicated on these platforms)

3. Specific platform/documentation reviews:

- Wevolver analysis

- Docubricks analysis