
Archives for: objc

Speak to me…

So, one of Keagan’s favorite things at school is this text-to-speech on the special ed computers. It’s meant for seriously autistic kids who don’t speak (not one of Keagan’s problems), but then again, what kid doesn’t like text-to-speech? When I was helping teach computer camp courses on the TI-99/4A during high school summer breaks, the easiest, highest-engagement part of the week was always the morning spent doing little one-off, two-line BASIC programs to use the 99’s Speech Synthesizer.

I went looking at our existing apps to see if it would be easy enough to tell Keagan to just type some text into Mariner Write and then do Edit -> Sound -> Speak All Text, but that seemed obtuse. I figured it wouldn’t be that hard to whip up my own trivial text-to-speech app for him.

It wasn’t. In fact, it took about 30 minutes, during which I also had to repeatedly get up and help Quinn color in Jakers in a little Flash app.

Anyways, the implementation file is like 50 lines total, so I suppose I can just dump it in here:

//
//  KeagySpeechController.m
//  KeagySpeech1
//
//  Created by Chris Adamson on 2/5/08.
//  Copyright 2008 Subsequently & Furthermore, Inc. All rights reserved.
//

#import "KeagySpeechController.h"

@implementation KeagySpeechController
- (void) handleSpeak:(id)sender {
	NSString *text = (NSString*) [[textView textStorage] string];
	[synth startSpeakingString: text];
	[stopButton setEnabled: YES];
	[speakButton setEnabled: NO];
	
}

- (void) handleStop:(id)sender {
	[synth stopSpeaking];
}


- (void)awakeFromNib {
	synth = [[NSSpeechSynthesizer alloc] init];
	[synth setDelegate: self];
	[speakButton setEnabled: YES];
	[stopButton setEnabled: NO];
	// select all text in the NSTextView so it's typed over by default
	NSString *text = (NSString*) [[textView textStorage] string];
	NSRange textRange;
	textRange.location = 0;
	textRange.length = [text length];
	[textView setSelectedRange: textRange];
}

// NSSpeechSynthesizer delegate methods
- (void)speechSynthesizer:(NSSpeechSynthesizer *)sender didFinishSpeaking:
		(BOOL)finishedSpeaking {
	[speakButton setEnabled: YES];
	[stopButton setEnabled: NO];
}

// no need to provide implementations of these:
// - (void)speechSynthesizer:willSpeakWord:ofString:
// - (void)speechSynthesizer:willSpeakPhoneme:

@end

Screenshot:
KeagySpeech1 screenshot

App icon:
KeagySpeech1 app icon

A few thoughts:

  • It was pretty easy because I didn’t need to stray far from the simplest examples shown in Apple’s Introduction to Speech.
  • Handling the “speech finished” callback is a nice, simple example of delegates. In Java, you’d have some interface to implement (like SpeechSynthesisListener) and you’d be obligated to implement all of its methods, no-op’ing the ones that are irrelevant to you. In Obj-C, you can just ignore the methods that don’t interest you, and rely on the right method being found at runtime. I suppose the downside is that if you misspell the callback method, nothing in the compiler is going to catch your mistake, and you’ll be left wondering why your method isn’t being called (there’s a small sketch of the runtime check after this list).
  • It’s good exercise to periodically start new XCode projects and wire up in IB. With the changes in Leopard, I sometimes forget that instead of creating the class in IB, I need to create it in XCode (at least the outlets and actions in the header), and then create a new NSObject in IB, setting its class to whatever I just created in XCode.
  • One thing I don’t like, coming from Java, is having to switch back and forth between OO and straight-C idioms. NSRange is not an object, it’s just a struct, so initializing it with values to set the NSTextView selection looks and feels completely different from working with real Obj-C objects. I understand why it’s this way — anything straight-C is also suitable for C++ and Carbon — and it’s probably pretty comfortable for those coming from C/C++ backgrounds. But compared to Java’s syntax, it does feel kludgy.
  • That said, whipping up the GUI in IB is still unparalleled. The Matisse editor in NetBeans is quite good, and probably the only practical choice for building Java GUIs, but it’s still not nearly as nice.
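
Here’s the runtime-check sketch mentioned in the delegates bullet above. It’s a hedged guess at the sort of test a class like NSSpeechSynthesizer makes before invoking an optional delegate method, not actual Apple code:

// a caller of an informal delegate asks at runtime whether the delegate
// actually implements the optional method; unimplemented (or misspelled)
// methods simply never get called
if ([[self delegate] respondsToSelector:
		@selector(speechSynthesizer:didFinishSpeaking:)]) {
	[[self delegate] speechSynthesizer: self didFinishSpeaking: YES];
}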

Anyways, I gotta get the kids to bed in 15 minutes, so I hope I covered everything… guess we’ll find out tomorrow if Keagan’s actually willing to use it.

Reversing the XCode / Interface Builder relationship

A while back, when Leopard was still under NDA, I complained that stuff has moved. I never did go back and explain what I meant, so here goes…

If you work from earlier XCode tutorials, you may well get thrown off by the fact that Interface Builder no longer has palettes or a “Classes” menu. The first you can do without, because the functionality has been replaced by the “Library” windoid. The second, though, requires that you change your whole workflow.

Many old guides would have you create a new NSObject subclass in Interface Builder, and then create an instance of it, all from IB’s Classes menu. Then you’d do “create classes for MyController” (or whatever), again from the Classes menu, and then go back to XCode to change all the ids to IBOutlets and IBActions. Then you could control-drag to associate widgets and controller methods. But, like I said, the new XCode doesn’t have a Classes menu, so how do you do all this?

Well, actually, you don’t use IB to create your model or controller classes anymore. The first I saw of this was in the QTKit Capture Programming Guide, whose Create the Project Using Xcode 3 section says:

This completes the first sequence of steps in your project. In the next sequence, you’ll move ahead to define actions and outlets in Xcode before working with Interface Builder. This may involve something of a paradigm shift in how you may be used to building and constructing an application with versions of Interface Builder prior to Interface Builder 3. Because you’ve already prototyped your QTKit still motion capture application, at least in rough form with a clearly defined data model, you can now determine which actions and outlets need to be implemented. In this case, you have a QTCaptureView object, which is a subclass of NSView, a QTMovieView object to display your captured frames and one button to record your captured media content and add each single frame to your QuickTime movie output.

In other words, the New World Order is to start by writing out a new class in XCode, hand-coding your outlets and actions:

#import <Cocoa/Cocoa.h>
#import <QTKit/QTKit.h>

@interface MyCaptureController : NSObject {
	IBOutlet QTCaptureView *capturePreviewView;
	IBOutlet NSPopUpButton *videoDevicePopUp;
	IBOutlet NSPopUpButton *audioDevicePopUp;
	IBOutlet NSTextField *fileNameField;
// ...etc.
}

@end

When I saw this, I didn’t particularly care for it, because I enjoy using IB as a means of mocking up the GUI, starting the process by thinking of the UI and then backing it up with code. That, I thought, was the Mac way. The QTKit tutorial implicitly does this, but by sketching out the GUI on paper, then coding the outlets and actions, and then going to IB to build the view and do the wire-up. Well, if I’m going to mock up, I can do it faster and more accurately by dragging and dropping real widgets rather than sketching it out on paper, sigh…

Still, after a couple tries, I got the hang of it. The thing I was missing was that after coding my controller or document in XCode, the key to getting it into IB is to drag an NSObject from the Library to the nib window, then use the inspector to assign it to the class you’ve created in XCode. Then you can wire up the outlets and actions. Actually, it was easier in the document-based application, since you don’t have to create your own controller object, as the MyDocument class serves this purpose, and has its own nib already in the project, so you just do your wiring there.

Actually, the QTKit Capture tutorial is a little more strident about “XCode first, then IB” than is perhaps necessary. The new Cocoa Application Tutorial shows a process by which you build the UI first in IB, then wire the UI to your models from the XCode side… see the Bridging the Model and View: The Controller section to walk through it.

So, it’s OK. I think everyone will get used to it and maybe like it better. But there are going to be lots of questions from people with old books and tutorials, looking for IB’s missing “Classes” menu.

Capturing from multiple devices with QTKit

Well, that came together a lot faster than I thought it would.

Recap: the old QuickTime SequenceGrabber limits you to capturing from one video device at a time. Leopard’s QTKit capture classes free you from this restriction, allowing you to capture, simultaneously, from however many devices you happen to have available.

Only thing is, the tutorial only shows how to work with a single device at a time. So it took a little experimenting to get this working.

As I mentioned in earlier installments, I decided to do an app where each window would have its own preview and would allow you to pick from the available devices. The QTCaptureView is associated with a single QTCaptureSession object, so I decided to do a document-based Cocoa application, with the MyDocument class responsible for delegating to the session, and handling UI events (i.e., acting as a controller).

I should still do a heck of a clean-up on the code, as there are a lot of commented-out false starts, and no effort whatsoever to manage memory properly. Actually, if QTKit capture is Leopard only, I could punt and turn on garbage collection. Anyways, while I’ll release the whole code once I add more features (capturing audio and recording to disk), here are the important parts, for the benefit of anyone who finds this via Google.
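
Before the code dumps, here’s a hedged reconstruction of the relevant part of MyDocument.h, inferred from the ivars and outlets the methods below use (the actual header isn’t included in this post):

// reconstructed sketch of MyDocument.h, not the actual posted header
#import <Cocoa/Cocoa.h>
#import <QTKit/QTKit.h>

@interface MyDocument : NSDocument {
    QTCaptureSession *captureSession;
    QTCaptureDevice *defaultDevice;
    NSMutableDictionary *namesToDevicesDictionary;
    NSString *defaultDeviceMenuTitle;
    IBOutlet QTCaptureView *capturePreviewView;
    IBOutlet NSPopUpButton *videoDevicePopUp;
}
- (IBAction)chooseVideoDevice:(id)sender;
@end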

 

• awakeFromNib

- (void)awakeFromNib {
    NSLog(@"awakeFromNib!");

    // create session and find default device
    captureSession = [[QTCaptureSession alloc] init];
    NSLog(@"got QTCaptureSession %@", captureSession);
    defaultDevice = [self getDefaultCaptureDeviceForMediaType: QTMediaTypeVideo];
    
    // build video source popup and select default
    [self buildNamesToDevicesDictionaryForMediaType:QTMediaTypeVideo
                                      includeMuxDevices:YES];
    [videoDevicePopUp removeAllItems];
    [videoDevicePopUp addItemsWithTitles: [namesToDevicesDictionary allKeys]];
    [videoDevicePopUp selectItemWithTitle: defaultDeviceMenuTitle];

    // interesting: programmatic selection of popup doesn't send a selection event.
    // ok then...
    [self chooseVideoDevice: self];
}

When a new window is created, I create a new QTCaptureSession and look up all the available devices (see next section). I use this to populate the NSPopUpButton and select the default device. Actually, setting the current item in the menu programmatically doesn’t call the IBAction like a user action would, so I have to call that method manually here.

 

• buildNamesToDevicesDictionaryForMediaType

- (void)buildNamesToDevicesDictionaryForMediaType: (NSString *)mediaType
                             includeMuxDevices:(BOOL)includeMux {
    if (namesToDevicesDictionary == nil) {
        namesToDevicesDictionary = [[NSMutableDictionary alloc] init];
    }
    [namesToDevicesDictionary removeAllObjects];
    
    // add default device first
    QTCaptureDevice *defaultDevice =
              [self getDefaultCaptureDeviceForMediaType: mediaType];
    // create an item called "Default (device_name)"
    defaultDeviceMenuTitle =
        [NSString stringWithFormat: @"Default (%@)", 
                  [defaultDevice localizedDisplayName]];
    [namesToDevicesDictionary setObject: defaultDevice 
                     forKey: defaultDeviceMenuTitle];
    
    // then find the rest
    NSArray* devicesWithMediaType =
              [QTCaptureDevice inputDevicesWithMediaType:mediaType];
    NSMutableSet* devicesSet = 
               [NSMutableSet setWithArray: devicesWithMediaType];
    // only pull in muxed (audio+video) devices if the caller asked for them
    if (includeMux) {
        NSArray* devicesWithMuxType =
                  [QTCaptureDevice inputDevicesWithMediaType:QTMediaTypeMuxed];
        [devicesSet addObjectsFromArray: devicesWithMuxType];
    }

    // add all devices from set to dictionary
    NSEnumerator *enumerator = [devicesSet objectEnumerator];
    id value;
    while ((value = [enumerator nextObject])) {
        QTCaptureDevice *device = (QTCaptureDevice*) value;
        [namesToDevicesDictionary setObject: device 
                     forKey: [device localizedDisplayName]];
    }

}

This method builds an NSDictionary mapping device names to QTCaptureDevices for every discovered device of a given media type (such as QTMediaTypeVideo). It can optionally include “muxed” devices, those that send audio and video over the same stream. It also adds the default device as an extra entry, whose key becomes the “Default (…)” item in the menu. I should probably put the default device at the top of the list and sort the rest of the devices alphabetically, something like the sketch below.
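
Something along these lines, maybe. It’s just a sketch (not code that’s in the project), reusing the namesToDevicesDictionary and defaultDeviceMenuTitle ivars from above; it would replace the popup-populating lines in awakeFromNib:

// hypothetical: sort the non-default device names alphabetically,
// then put the "Default (...)" item at the top of the popup
NSMutableArray *titles =
    [NSMutableArray arrayWithArray: [namesToDevicesDictionary allKeys]];
[titles removeObject: defaultDeviceMenuTitle];
[titles sortUsingSelector: @selector(caseInsensitiveCompare:)];
[titles insertObject: defaultDeviceMenuTitle atIndex: 0];
[videoDevicePopUp removeAllItems];
[videoDevicePopUp addItemsWithTitles: titles];
[videoDevicePopUp selectItemWithTitle: defaultDeviceMenuTitle];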

 

• getDefaultCaptureDeviceForMediaType

- (QTCaptureDevice*)getDefaultCaptureDeviceForMediaType: (NSString *)mediaType {
    NSLog (@"getDefaultCaptureDevice");
    // set up the default device
    QTCaptureDevice *foundDevice =
        [QTCaptureDevice defaultInputDeviceWithMediaType: mediaType];
    NSLog (@"got default device %@", foundDevice);
    // try for a muxed device (eg, a dv camcorder) if that was nil
    if (foundDevice == nil)
        foundDevice = [QTCaptureDevice defaultInputDeviceWithMediaType: 
                                      QTMediaTypeMuxed];
    return foundDevice;
}

This is a convenience method to get the default audio or video device, called by buildNamesToDevicesDictionaryForMediaType, above. Looking at it now, I suppose I should make the fall-back-to-muxed-devices block optional.
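
Something like this hypothetical variant, say (just a sketch, not code that’s in the project):

- (QTCaptureDevice*)getDefaultCaptureDeviceForMediaType: (NSString *)mediaType
                                          fallBackToMux: (BOOL)fallBack {
    QTCaptureDevice *foundDevice =
        [QTCaptureDevice defaultInputDeviceWithMediaType: mediaType];
    // only fall back to a muxed device (eg, a dv camcorder) if asked to
    if ((foundDevice == nil) && fallBack) {
        foundDevice = [QTCaptureDevice defaultInputDeviceWithMediaType: 
                                      QTMediaTypeMuxed];
    }
    return foundDevice;
}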

 

• chooseVideoDevice

- (IBAction)chooseVideoDevice:(id)sender {
    NSLog (@"chooseVideoDevice");

    // stop the session while we mess with it?
    [captureSession stopRunning];

    NSString* chosenDeviceName = [videoDevicePopUp titleOfSelectedItem];

    NSLog (@"choose device %@", chosenDeviceName);
    QTCaptureDevice *chosenDevice =
        (QTCaptureDevice*) [namesToDevicesDictionary objectForKey:
                   chosenDeviceName];
    if (chosenDevice == nil) {
        NSLog (@"couldn't find %@ in dictonary", chosenDeviceName);
        return;
    }
    NSLog (@"looked up device %@", chosenDevice);

    // remove any existing inputs
    NSEnumerator* inputEnum = [[captureSession inputs] objectEnumerator];
    NSLog (@"removing %d existing inputs", [[captureSession inputs] count]);
    id value;
    while ((value = [inputEnum nextObject])) {
        QTCaptureInput *input = (QTCaptureInput*) value;
        [captureSession removeInput: input];
    }

    // add an input for this device
    NSError *openError = nil;
    if (! [chosenDevice open: &openError]) {
        NSLog (@"Can't open %@: %@", chosenDevice, openError);
        return;
    }
    
    QTCaptureDeviceInput* deviceInput = 
           [QTCaptureDeviceInput deviceInputWithDevice: chosenDevice];
    NSLog (@"Got an input");
    
    NSError *addInputError = nil;
    if (! [captureSession addInput: deviceInput error: &addInputError]) {
        NSLog (@"Can't add input %@: %@", deviceInput, addInputError);
        return;
    }
    NSLog (@"added input to session");

    // TODO: move to awakeFromNib?
    [capturePreviewView setCaptureSession: captureSession];
    NSLog (@"set session on view");

    // (re-)start session
    [captureSession startRunning];
}

Magic time. This is called initially from awakeFromNib with the default device, then again whenever the user makes a selection from the video device popup menu. It stops the QTCaptureSession temporarily (actually I need to take that out and see if it’s really necessary; I did it here because you had to stop the old SequenceGrabber when you dicked with it), removes any existing QTCaptureInputs, gets a new input from the selected device, connects that input to the session, and restarts the session.

 

And that’s about it

That’s pretty much all the code that matters in MyDocument.m. This is surely buggy and needs review and testing before using it for anything more serious than screwing around: I’m completely careless about opening and closing devices, I haven’t checked whether one session’s use of a device will break another’s, etc. And I’ve got debugging NSLog statements all over the place. I also noticed a couple of these in my log after device switches:

CoreImage: detected malformed affine matrix:
AFFINE [nan nan nan inf nan nan] ARGB_8

But it’s an experiment, not an ADC article. And it’s proven to me that QTKit’s multiple-camera support works, and really well.

 

Camera tour

As for the cameras used in this test, here’s a shot of my desk:

  1. A MacAlly IceCam – A USB 1.1 video capture device, one of the few that ships with drivers for Mac and Windows. Since it’s limited to USB 1.1 bandwidth, the image quality and framerate are dreadful.
  2. An original Apple iSight, the external FireWire variety that you can’t buy anymore.
  3. A Logitech QuickCam for Notebooks Pro, a USB Video Class device that I blogged about on O’Reilly a while back.

Still working on multi-camera QTKit capture

I took a few hours Sunday to take my earlier QTKit capture experiment 2 and get a live camera-switch working. In particular, I want to have multiple cameras capturing at once, something QTKit can do and QuickTime can’t.

I realized that the typical simple Cocoa arrangement of a single window and a controller object wasn’t appropriate for what would need to be a multi-window app. To get multiple windows up and running quickly, I opted for a Cocoa document-based application, figuring each “document” will be a simple delegate to a QTCaptureSession object, and then removing everything relating to saving and loading the document from disk (since that’s not really the point here). I imagine the clean way to do this would be to manage my own windows with NSWindowControllers, but this got me up and running quickly.

Funny thing: I didn’t think the preview of the default device would work when I copied it over from the old code, since I didn’t think I’d written anything to actually find the default video device and wire it up to the QTCaptureView. But as it turns out, in my blind copying, I’d copied awakeFromNib, in which I was doing all this work, and it’s called every time you create a new window in a document-based Cocoa application. So, bonus: I got the basics up and running more or less for free.

However, wildly slapping code around has its limits, and my attempts to find other devices and get their QTCaptureOutputs’ connections wired up to the preview haven’t worked. I’m going to remove all that code and take a more deliberate approach, pairing each window’s session object only with the selected device.

Hopefully the next time I blog on this, I’ll have three windows from three different cams.

Oh, and then handle audio.

Touring QTKit capture devices

Working on QTKitCaptureExperiment2. The goal for this one is to have multiple windows, each potentially capturing from a different source (something QTKit can do and QuickTime’s old SequenceGrabber can’t). Once I got the device iteration working and populating the NSPopUpButton, I plugged in some of my cameras.

Turns out some combinations work, and some don’t. Here’s how things look with my FireWire iSight and a UVC Logitech cam:

QTKitCaptureExperiment2 with 2 video capture devices

Nifty, huh? Now plug in a Canon camcorder to the other FireWire port:

QTKitCaptureExperiment2 with 3 capture devices

I just got this sort of working, and it’s already past bedtime, so I haven’t had time to figure this out further… whether it’s just the ZR25, or whether it’s having two DV devices on the FireWire bus, or what.

Here’s the relevant code for creating the NSDictionary that populates the popup menu:

- (void)buildNamesToDevicesDictionaryForMediaType: (NSString *)mediaType 
				includeMuxDevices:(BOOL)includeMux {
	if (namesToDevicesDictionary == nil) {
		namesToDevicesDictionary = [[NSMutableDictionary alloc] init];
	}
	[namesToDevicesDictionary removeAllObjects];
	
	// add default device first
	QTCaptureDevice *defaultDevice =
		[self getDefaultCaptureDeviceForMediaType: mediaType];
	// create an item called "Default (device_name)"
	NSString *defaultDeviceName =
		[NSString stringWithFormat: @"Default (%@)",
			[defaultDevice localizedDisplayName]];
	[namesToDevicesDictionary setObject: defaultDevice forKey: defaultDeviceName];
	
	// then find the rest
	NSArray* devicesWithMediaType =
		[QTCaptureDevice inputDevicesWithMediaType:mediaType];
	NSArray* devicesWithMuxType =
		[QTCaptureDevice inputDevicesWithMediaType:QTMediaTypeMuxed];
	NSMutableSet* devicesSet = [NSMutableSet setWithArray: devicesWithMediaType];
	[devicesSet addObjectsFromArray: devicesWithMuxType];

	// add all devices from set to dictionary
	NSEnumerator *enumerator = [devicesSet objectEnumerator];
	id value;
	while ((value = [enumerator nextObject])) {
		QTCaptureDevice *device = (QTCaptureDevice*) value;
		[namesToDevicesDictionary setObject: device
				forKey: [device localizedDisplayName]];
	}

}


- (QTCaptureDevice*)getDefaultCaptureDeviceForMediaType: (NSString *)mediaType {
	NSLog (@"getDefaultCaptureDevice");
	// set up the default device
	QTCaptureDevice *foundDevice =
		[QTCaptureDevice defaultInputDeviceWithMediaType: mediaType];
	NSLog (@"got default device %@", foundDevice);
	// try for a muxed device (eg, a dv camcorder) if that was nil
	if (foundDevice == nil)
		foundDevice =
			[QTCaptureDevice defaultInputDeviceWithMediaType: QTMediaTypeMuxed];
	return foundDevice;
}

Update (10 minutes later): OK, the issue seems to be the iSight and the Canon camcorder fighting over the FireWire bus, I guess. Unplugging the iSight lets me see the others, and bring each up as the default device. Here’s the little notebook cam with a shot from the floor (the laptop-size cable wasn’t long enough to get it in position for a decent shot):

QTKitCaptureExperiment2 with UVC laptop cam and camcorder

And here’s the view when just the camcorder is plugged in:

QTKitCaptureExperiment2 with only a camcorder plugged in

QTKit capture

My muddled-through QTKit capture application is beginning to work:

Screenshot of a Leopard QTKit experiment working for the first time

Actually, I wasn’t too far off in getting the preview up. There’s a QTCaptureSession object that I’d alloc’ed but cluelessly failed to init (yeah, Obj-C is definitely a second language to me, still), and I’d overlooked the need to call [captureSession startRunning], which looks a lot like starting up an idler thread for the old SequenceGrabber, just much less ugly.
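
To spare anyone else the head-banging, here’s a minimal sketch of the steps involved, assuming a captureSession ivar and a capturePreviewView outlet wired to the QTCaptureView (my actual code isn’t nearly this tidy yet):

// the session has to be alloc'ed *and* init'ed, then given an input and
// started; nothing shows up in the QTCaptureView until startRunning
captureSession = [[QTCaptureSession alloc] init];

QTCaptureDevice *device =
    [QTCaptureDevice defaultInputDeviceWithMediaType: QTMediaTypeVideo];
NSError *error = nil;
if ([device open: &error]) {
    QTCaptureDeviceInput *input =
        [QTCaptureDeviceInput deviceInputWithDevice: device];
    [captureSession addInput: input error: &error];
    [capturePreviewView setCaptureSession: captureSession];
    [captureSession startRunning];
}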

What unblocked me was five minutes with the “Building a Simple QTKit Capture Application” tutorial… I don’t see it on Apple’s website, but it’s part of the Leopard dev docs, under QuickTime > Conceptual > QTKitCaptureProgrammingGuide.

Oh, none of those buttons do anything yet – right now it’s just previewing the default device. What I aim to figure out is how to achieve QTKit’s huge advantage over old QuickTime capture: the ability to use multiple capture devices simultaneously. What I’m going to try is to have each window own a QTCaptureSession, which is naturally associated with the QTCaptureView in the window. My concerns are how I’m going to do the device-selection GUI (I just bought Aaron Hillegass’ Cocoa Programming for Mac OS X because I saw it covers sheets), and how to manage multiple windows (Interface Builder gives me the first window for free… later instances presumably need to be instantiated and made visible in code, along the lines of the sketch below).
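
For the multiple-windows part, I assume it’s something along these lines (a hedged sketch; “CaptureWindow” is a hypothetical nib name):

// hypothetical: load another copy of a window nib and put it on screen
NSWindowController *windowController =
    [[NSWindowController alloc] initWithWindowNibName: @"CaptureWindow"];
[windowController showWindow: self];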

Still, not bad for banging my head cluelessly against the class documentation. Might be a pleasure to use once I know what I’m actually doing. If I really build this example out, it might be nice to pair an audio device with each window and do an animated level meter. That assumes QTKit has a method for level metering; otherwise maybe there’s a way to use the straight-C QuickTime approach, except that’s for movies and not capture. Still, it’s doable; I had capture-time level-metering in the QTJ book, after all. I just have to assume that QTKit capture will let you escape to regular QT if you really need to, like the movie playback/editing parts of QTKit do.

Anyways, seeing the preview work made my weekend. And after all the hackery and misery of juggling Java and native threads in Keaton (one reason I rarely hack on it), coding Obj-C qua Obj-C was really pleasant this time.

Stuff has moved

I’m in XCode in Leopard, messing around with the new QTKit before Leopard’s big launch on Friday. Everything is still NDA of course, which makes it hard to find answers to questions other than those provided by the sparse pre-release documentation. When I do get a Google hit, it’s usually someone on the archive of Apple mailing lists saying “don’t talk about NDA stuff on a public list.”

After this first hour or two, I’m actually still battling with XCode and Interface Builder and not truly coding yet. I’ve finally got a simple interface built and outlets and actions wired up, and the no-op code compiling. Yeah, took 90 minutes, sigh. Four more days of NDA, but let’s just say this about XCode and IB for now: stuff has moved…

Threads on the head

So at some point, I need to put some cycles in on Keaton, a project I started and have horribly neglected, the point of which is to provide a Java-to-QTKit wrapper. And for those of you who are new here, QTKit is a Cocoa/Objective-C binding to a useful subset of QuickTime functionality.

I have a small bit of useful functionality, enough to open a movie from a file or URL, get it into an AWT component, and play it. But one thing I can’t do right now is provide any kind of getter or setter methods.

Here’s the problem I need to feel comfortable with before my next programming push: if you’re going to mess with the Cocoa-side QTMovie object, you generally need to do so from AppKit’s main thread. In fact, creating a QTMovie from any other thread throws a runtime exception. Swing developers may be reminded of the rules about only touching Swing GUIs from the AWT event-dispatch thread.

Given that, here’s the problem. Let’s say I have a Java method that does a “get” from a QTKit object. For example, think about the Java method QTMovie.getDuration(), which wraps a native call to [QTMovie duration]. The Java thread will make the native call, but since that native call can’t mess with the movie on that thread, it needs to use the AppKit main thread. So it does a performSelectorOnMainThread, passing a selector for some code that actually does the [QTMovie duration] call.
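
For concreteness, here’s a rough sketch of that hand-off on the Objective-C side (hypothetical method names, not Keaton’s actual code). With waitUntilDone:NO, performSelectorOnMainThread: returns right away, which is exactly the problem described next:

// hypothetical helper on the native side of the JNI call
- (void) requestDurationOfMovie: (QTMovie*) movie {
    // can't touch the movie on this (Java-originated) thread, so bounce
    // the work over to AppKit's main thread and return
    [self performSelectorOnMainThread: @selector(fetchDurationOnMainThread:)
                           withObject: movie
                        waitUntilDone: NO];
    // returns immediately; the duration isn't available yet
}

- (void) fetchDurationOnMainThread: (QTMovie*) movie {
    QTTime duration = [movie duration];
    // now somehow get this value back to the waiting Java thread
}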

Problem is, having passed off the call to AppKit’s main thread, my native function returns (without receiving a value, because of course the real value is being retrieved on another thread), which in turn returns no meaningful value to the Java method.

Somehow, the Java method needs to wait for a value to be provided to it. OK, fine, the Java methods are synchronized and do an Object.wait() to block the calling thread immediately after the JNI call. Presumably, the native code then needs to keep track of the calling object (provided by the JNI boilerplate), hang on to it through the performSelectorOnMainThread, and then send the return value back to Java by setting an instance variable on the appropriate Java object and calling Object.notify() to unblock the Java thread that initiated the call, informing it that it can pick up its return value from wherever the native call left it.

Thinking this through, I think it may work, but I’m not at all comfortable that it’s thread-safe. Is it possible (esp. on multi-core) for the native stuff to all take place between the Java thread calling the JNI function and the Object.wait()? If so, the notify() will be useless and the subsequent wait() will deadlock. And what if multiple threads use the same object? When the first call hits wait(), it’ll release the object lock and a second thread could enter the synchronized method… right?

It’s possible I’m just making excuses to not take a few hours, bang it out, and see how far I get. After all, it’s highly likely the only callers will be on the AWT event-dispatch thread, so even if the code is thread-unsafe at first, it might not matter in practice.